forum_id (string, 9-20 chars) | forum_title (string, 3-179 chars) | forum_authors (sequence, 0-82 items) | forum_abstract (string, 1-3.52k chars) | forum_keywords (sequence, 1-29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39-50 chars) | forum_url (string, 41-52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
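The `reviews` column stores each forum's notes as parallel lists (`note_id`, `note_type`, `note_created`, `note_signatures`, `structured_content_str`), where every `structured_content_str` entry is itself a JSON-encoded note body. Below is a minimal parsing sketch, assuming a single row of this table has already been loaded as a Python dict named `row`; the `iter_notes` helper is illustrative, not part of the dataset.

```python
import json

def iter_notes(row):
    """Yield one dict per note (review, comment, decision, ...) in a row's `reviews` column."""
    reviews = row["reviews"]
    # The five lists are parallel: index i of each list describes the same note.
    for note_id, note_type, created, signatures, content_str in zip(
        reviews["note_id"],
        reviews["note_type"],
        reviews["note_created"],        # creation time in epoch milliseconds
        reviews["note_signatures"],     # signing groups, e.g. the authors or a reviewer id
        reviews["structured_content_str"],
    ):
        # Each entry is a JSON-encoded note body, e.g. {"comment": ...} or a full official_review.
        content = json.loads(content_str)
        yield {
            "note_id": note_id,
            "note_type": note_type,
            "created": created,
            "signatures": signatures,
            "content": content,
        }
```

For example, filtering the yielded items on `note_type == "official_review"` recovers just the reviews, whose rating and confidence fields appear inside the decoded `content`.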
1YTF7Try7H | Implicit Bridge Consistency Distillation for One-Step Unpaired Image Translation | [
"Suhyeon Lee",
"Kwanyoung Kim",
"Jong Chul Ye"
] | Recently, diffusion models have been extensively studied as powerful generative tools for image translation. However, the existing diffusion model-based image translation approaches often suffer from several limitations: 1) slow inference due to iterative denoising, 2) the necessity for paired training data, or 3) constraints from learning only one-way translation paths. To mitigate these limitations, here we introduce a novel framework, called Implicit Bridge Consistency Distillation (IBCD), that extends consistency distillation with a diffusion implicit bridge model that connects PF-ODE trajectories from any distribution to another one. Moreover, to address the challenges associated with distillation errors from consistency distillation, we introduce two unique improvements: Distribution Matching for Consistency Distillation (DMCD) and distillation-difficulty adaptive weighting method. Experimental results confirm that IBCD for bidirectional translation can achieve state-of-the-art performance on benchmark datasets in just one step generation. | [
"image translation",
"consistency distillation",
"unpaired",
"one-step",
"diffusion models"
] | Reject | https://openreview.net/pdf?id=1YTF7Try7H | https://openreview.net/forum?id=1YTF7Try7H | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"y6SQrT528K",
"q2ehrzvMZu",
"kyGsMTlBvR",
"jadNCy0p8M",
"bLP29Vqau7",
"XnaSFXV0xZ",
"VqSYmGj7pu",
"TuvpGAomGv",
"Tl3hG7clMn",
"NyQTwviwRx",
"Jzp4RAj1Vf",
"Ixp0XXDjsU",
"E0gA7KUvnA",
"CvhzGVz4MW",
"BGm2L2G6KT",
"6UTebhNztI",
"5ycojK88cv",
"4uovAVabac",
"48ec96Z4Ui"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732507057524,
1732497882515,
1732261049612,
1737523576175,
1732506748310,
1732499721433,
1732693290272,
1734756938898,
1732609347984,
1732587689242,
1730722985978,
1732260905667,
1729944659412,
1730465603799,
1732792282840,
1732261187624,
1732260718147,
1732261212468,
1732499624082
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3440/Reviewer_etdT"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Area_Chair_117y"
],
[
"ICLR.cc/2025/Conference/Submission3440/Reviewer_CDyb"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Reviewer_UgkH"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Reviewer_etdT"
],
[
"ICLR.cc/2025/Conference/Submission3440/Reviewer_CDyb"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3440/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for your kind words and for taking the time to review our work. We appreciate your understanding and the increase in your score.\\n\\nAs you suggested, we will include the results for Q1 in the revised manuscript. We also thank you for the insightful suggestion regarding diversity. \\n\\nThank you again for your valuable feedback.\"}",
"{\"comment\": \"Dear reviewer UgkH,\\n\\nWe've carefully considered your valuable feedback and have made the following revisions to our manuscript and responses to your questions:\\n\\n1. Clarified our contributions and the determinism of the inference process.\\n2. Clarified the ablation studies.\\n3. Added diversity evaluation (density-coverage) results.\\n4. Added extra failure cases.\\n\\nAs the discussion period deadline is approach fast, we would appreciate it if you could provide your feedback whether our revision and rebuttal have fully addressed your concerns.\\n\\nWe appreciate your time and consideration.\\n\\nBest regards, Authors\"}",
"{\"comment\": \"**W1. Missing comparison of the results for bidirectional translation.**\\n\\nWe appreciate the reviewer's suggestion to include quantitative comparative experiments on bidirectional translation. In response, we have added a comprehensive evaluation in Appendix D.6, where we conduct opposite translation (Dog\\u2192Cat, Dog\\u2192Wild, Female\\u2192Male) and cycle translation (Cat\\u2192Dog\\u2192Cat, Wild\\u2192Dog\\u2192Wild, Male\\u2192Female\\u2192Male) tasks. While bidirectional models and public checkpoints are limited, our results in **Table 6, Figures 12 and 13** demonstrate that our model's bidirectional performance is on par with its unidirectional capabilities.\\n\\n**(Partial) Table 6: Quantitative comparison of unpaired image-to-image translation tasks (opposite & cycle translation)**\\n| Task | Method | FID $\\\\downarrow$ | PSNR $\\\\uparrow$ | SSIM $\\\\uparrow$ | Density $\\\\uparrow$ | Coverage $\\\\uparrow$ |\\n|---------------|----------------|:---------------:|:---------------:|:------------------:|:----------------:|:----------------:|\\n|Dog$\\\\rightarrow$Cat|StarGAN v2 | 37.73 | 16.02 | 0.399 | 1.336 | 0.778 |\\n||CycleDiffusion | 40.45 | 17.83 | 0.493 | 1.064 | 0.774 |\\n|| **IBCD (Ours)** | 28.99 | **19.10** | **0.695** | 1.699 | 0.894 |\\n|| **IBCD$\\\\dagger$ (Ours)** | **28.41** | 17.40 | 0.653 | **2.112** | **0.920** |\\n|Cat$\\\\rightarrow$Dog$\\\\rightarrow$Cat|StarGAN v2 | 30.53 | 16.30 | 0.382 | 1.717 | 0.890 |\\n|| CycleDiffusion | 39.59 | 19.01 | 0.434 | 0.731 | 0.676 |\\n|| **IBCD (Ours)** | **22.42** | **22.35** | **0.767** | 1.322 | **0.992** |\\n|| **IBCD$\\\\dagger$ (Ours)** | 24.03 | 20.28 | 0.724 | **1.749** | 0.988 |\\n|...|...|...|...|...|...|...|\\n\\n\\n**W2. Missing comparison of computation cost with the existing methods to show the efficiency of the proposed method.**\\n\\nPer your suggestion, we have conducted additional experiments comparing the actual inference time of our method with major open-source baselines. As shown in **(new) Table 5 and Appendix D.4**, our methodology demonstrates a significant advantage in inference computational complexity.\\n\\n**(Partial) Table 5: Quantitative comparison of model inference times.**\\n| Method |...| Time [s/img] $\\\\downarrow$ | Relative Time $\\\\downarrow$ |\\n|----------------|:-:|:------------:|:-------------:|\\n| StarGan v2 |...| 0.058 | 5.5 |\\n| CUT |...| 0.068 | 6.4 |\\n| UNSB |...| 0.104 | 9.9 |\\n| ILVR |...| 12.915 | 1224.2 |\\n| SDEdit |...| 6.378 | 604.5 |\\n| EGSDE |...| 15.385 | 1458.3 |\\n| CycleDiffusion |...| 26.032 | 2467.5 |\\n| DDIB (Teacher) |...| 0.965 | 90.6 |\\n| **IBCD (Ours)**|...| **0.011** | **1** |\\n\\n\\n**W3. The results in Table 3 show the model which added DMCD loss, cycle loss and adaptive DMCD degrades the performance in terms of PSNR and SSIM compared to the method using IBCD only.**\\n\\nTo clarify the comparison, we've updated Table 3 to benchmark against the lowest FID achievable by each individual component. Additionally, we've introduced a new metric, PSNR-Teacher, which uses the DDIB teacher's output as a PSNR ground truth. This metric allows us to assess the resolution of distillation error, a primary focus of our auxiliary loss approach.\\n\\nOur results demonstrate that each added component consistently reduces FID beyond the lower bound achievable by vanilla IBCD, while effectively mitigating the inherent PSNR trade-off and minimizing distillation error. 
Notably, updated Figure 6 highlights that even under identical PSNR conditions, the application of auxiliary losses consistently yields lower (improved) FID scores or vice versa.\\n\\n**Table 3: Quantitative ablation study results in the Cat\\u2192Dog task under the lowest FID.**\\n| Component | FID $\\\\downarrow$ | PSNR-teacher $\\\\uparrow$ | PSNR-source $\\\\uparrow$ |\\n|----------------------|:----------------:|:-----------------------:|:----------------------:|\\n| IBCD only | 48.12 | 18.27 | 19.02 |\\n| + DMCD | 44.40 | 17.95 | 16.80 |\\n| + DMCD & Cycle | 44.31 | 18.22 | 17.19 |\\n| + adap. DMCD & Cycle | 44.69 | 18.97 | 18.04 |\\n\\n\\n**W4. The zero in the first row of Eq. (6) might be $\\\\chi_A \\\\cap \\\\chi_B$.**\\n\\nThanks for your careful reading. Typo fixed.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thanks for the authors for the efforts and feedbacks, I think most of my questions are addressed by the rebuttal, so I raise my score accordingly. I would appreciate if the authors could add the results in Q1. in the revision.\\n\\nBesides, as for the diversity or degraded stochasticity mentioned by Reviewer UgkH, I think it could be a solution to add a random perturbation at $\\\\sigma_{max}$, *i.e.*, manually increase stochasticity at the intersection between two domains.\"}",
"{\"comment\": \"Dear reviewer CDyb,\\n\\nWe've carefully considered your valuable feedback and have made the following revisions to our manuscript and responses to your questions:\\n\\n1. Added bidirectional translation evaluation results.\\n2. Added inference computational cost comparison results.\\n3. Clarified ablation studies.\\n\\nAs the discussion period deadline is approach fast, we would appreciate it if you could provide your feedback whether our revision and rebuttal have fully addressed your concerns.\\n\\nWe appreciate your time and consideration.\\n\\nBest regards, Authors\"}",
"{\"comment\": \"We appreciate the reviewer's acknowledgment of our efforts to address the initial concerns. While our framework leverages multiple techniques, it does so in a novel and non-trivial manner, pushing the boundaries of current methodologies. Our key contributions are as follows:\\n\\n1. **Novel Framework**: Our model uniquely satisfies four critical properties\\u2014*one-step, unpaired, bidirectional, and non-discriminator*\\u2014simultaneously. To our knowledge, no prior work has achieved this combination.\\n\\n2. **Technical Innovations**: Extending consistency distillation to *bidirectional diffusion bridge* models required significant technical advancements. Our contributions include:\\n\\n- Adapting consistency distillation to bidirectional & bridge trajectories by innovating on timestep design, training schemes, boundary conditions, and model parametrization.\\n\\n- Introducing auxiliary losses (e.g., DMCD, adaptive DMCD, bidirectional cycle loss) that were uniquely integrated and modified for our IBCD framework to reduce distillation loss, and enhance reality and fidelity.\\n\\n3. **Validated Superiority**: Extensive experiments, ranging from toy to many high-dimensional datasets, validate the significant impact of these innovations on performance across diverse tasks and metrics, establishing new state-of-the-art results. Another reviewer has also acknowledged the significance of this contribution.\\n\\nWe believe our work offers a valuable contribution to the field and hope this additional explanation clarifies its novelty and significance.\\n\\nWe are open to further feedback and suggestions to improve the manuscript. Thank you for taking the time to review our work.\"}",
"{\"metareview\": \"This paper proposes a method for unpaired image-to-image translation. The key idea is to leverage consistency distillation on an implicit bridge model. Overall, the reviewers appreciate the task and the visualization of the paper. However, the reviewers have concerns over the contribution/novelty of the paper, and the significance of the empirical results. In particular, two reviewers expressed that the approach is indeed a combination and remained unconvinced after the discussion period.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, most of the questions regarding details and clarification have been addressed. While reviewer CDyb mentioned that part of the concerns were addressed, the reviewer also mentioned the combined nature of the approach and did not raise the rating. Next, unfortunately, Reviewer UgkH did not engage during the discussion period. In this case, the AC has checked over the authors' response and found that it would be unlikely that Reviewer UgkH would change the opinion. For example, Reviewer UgkH asked about the significance of the FID, stating ``If you repeat the experiment twice, the variance might be even larger.'' On the other hand, the authors responded with \\\"do not indicate variance as there is no inherent probabilistic mechanism\\\". This response misses the point, Reviewer UgkH is asking about the randomness from \\\"repeat the experiment twice\\\", e.g., if one were to train the model twice (with different random seeds) the result would not be the same. Next, the concerns about contribution and whether work is incremental is always a subjective matter. From the AC's perspective, the combination nature of an approach does not directly equate to a lack of contribution or novelty. However, the AC finds the motivation of such a combination to be a bit weak in the paper. Finally, the paper could be strengthened by conducting a user study on the improvement and providing more qualitative comparisons in the paper.\"}",
"{\"comment\": \"I thank the authors for addressing my concerns. But I lean to agree with reviewer UgkH about the approach's combination nature.\"}",
"{\"comment\": \"This is just a kind reminder as the deadline for the paper revision period is approaching. We are looking forward to hearing your feedback and will be happy to answer any further questions.\"}",
"{\"summary\": \"Diffusion models are widely used for image translation. This paper identifies limitations in existing approaches: slow inference, need for paired data, and one-way translation constraints. It introduces Implicit Bridge Consistency Distillation (IBCD) to address these issues. IBCD extends consistency distillation with a diffusion implicit bridge model. The paper proposes two improvements: Distribution Matching for Consistency Distillation (DMCD) and a distillation-difficulty adaptive weighting method. Experimental results show IBCD achieves state-of-the-art performance in one-step generation for bidirectional translation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Pro:\", \"The sampling speed in image-to-image translation is a critical problem in this area.\", \"The paper combines various techniques, including DDIB and consistency models.\"], \"weaknesses\": [\"Con:\", \"The main concern is that the method seems too incremental, appearing to be merely a combination of DDIB and consistency models.\", \"In Table 3, the FID improvement from adding Cycle and DMCD is marginal. Is the author aware of what a 0.1 FID change means? If you repeat the experiment twice, the variance might be even larger. This becomes a significant issue when the FID is so high. Also, most baselines in Table 2 show the variance of FID, while the author didn't. As you can see, the variance of other methods is quite large, further undermining the ablation study in Table 3.\", \"With only a single step, the stochasticity is significantly reduced. The authors should include several other related metrics that highlight diversity, such as the Inception Score. Additionally, more failure cases should be provided for better understanding of the method's limitations.\"], \"questions\": \"as above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**W1: The method seems too incremental, appearing to be merely a combination of DDIB and consistency models.**\\n\\nIn contrast to your misunderstanding, our work introduces a novel image translation model framework that uniquely satisfies four crucial properties: one-step, unpaired, bidirectional, and non-discriminator. This framework establishes a new state-of-the-art performance across various tasks and evaluation metrics while maintaining these desirable properties.\\n\\nExtending consistency distillation to DDIB, particularly in a bidirectional manner, has never been tried before due to several technique challenges. To address this, we have developed several novel approaches such as adapting consistency distillation to diffusion bridge trajectories (timesteps, training scheme), designing appropriate boundary conditions for bidirectional training, and carefully parameterizing the model for bidirectional diffusion bridge (Appendix B). Furthermore, we introduced additional auxiliary losses, strategically designed to mitigate the distillation loss and simultaneously improve reality and fidelity. These losses were carefully integrated into the IBCD framework with framework-aware modification (DMCD, adaptive DMCD, bidirectional cycle loss). \\n \\nExtensive ablation studies validate the significance of these advances in achieving our framework's superior performance across various tasks and evaluation metrics. Therefore, our work cannot be considered as incremental.\\n\\n\\n**W2. In Table 3, the FID improvement from adding Cycle and DMCD is marginal.**\\n\\nAs you correctly noted, the bottom three components of Table 3 do **not directly compare the superiority of FID**. Instead, they present a scenario where PSNR is considered in situations with similar FID values. \\n\\nTo clarify the comparison, we've updated Table 3 to benchmark against the lowest FID achievable by each individual component. Additionally, we've introduced a new metric, PSNR-Teacher, which uses the DDIB teacher's output as a PSNR ground truth. This metric allows us to assess the resolution of distillation error, a primary focus of our auxiliary loss approach.\\n\\nTable 3 demonstrates that each added component consistently reduces FID beyond the lower bound achievable by vanilla IBCD, while effectively mitigating the inherent PSNR trade-off and minimizing distillation error. Notably, updated Figure 6 highlights that even under identical PSNR conditions, the application of auxiliary losses consistently yields lower FID scores or vice versa.\\n\\n**Table 3: Quantitative ablation study results in the Cat\\u2192Dog task under the lowest FID.**\\n| Component | FID $\\\\downarrow$ | PSNR-teacher $\\\\uparrow$ | PSNR-source $\\\\uparrow$ |\\n|-|:-:|:-:|:-:|\\n| IBCD only | 48.12 | 18.27 | 19.02 |\\n| + DMCD | 44.40 | 17.95 | 16.80 |\\n| + DMCD & Cycle | 44.31 | 18.22 | 17.19 |\\n| + adap. DMCD & Cycle | 44.69 | 18.97 | 18.04 |\\n\\n**W3. Most baselines in Table 2 show the variance of FID, while the author didn't.**\\n\\nThe criteria for indicating and not indicating the variance of the results are the same as those adopted in many previous studies. Models that indicate variance in quantitative results (ILVR, SDEdit, EGSDE, SDDM) utilize a probabilistic sampler (SDE), generating probabilistic samples for each translation. Consequently, their variance is explicitly indicated. 
In contrast, models that employ non-deterministic sampling during the translation process (ours, CycleDiffusion, GAN-based models, etc.), do not indicate variance as there is **no inherent probabilistic mechanism in the translation process**.\\n\\n**W4. With only a single step, the stochasticity is significantly reduced. The authors should include several other related metrics that highlight diversity, such as the Inception Score.**\\n\\nTo address the reviewer's concern, we additionally conducted a quantitative diversity analysis using the **density-coverage metric**[1] in addition to the FID metric (updated Table 2). This metric, which separately evaluates quality and diversity, is more suitable for our tasks, as Inception Score's (Inception v3) classification granularity is insufficient to assess diversity within a single domain (dog or female). \\n\\nAs shown in the **Table 2 (please refer to the updated text)**, our one-step non-deterministic mapping approach outperforms baselines in both quality (density) and diversity (coverage).\\n\\n**W5. More failure cases should be provided for better understanding of the method's limitations.**\\n\\nPer your suggestion, we have updated Figure 11 to include failure cases for each task, providing a more comprehensive visualization of our model's performance (14 \\u2192 35 cases). \\n\\n**References:**\\n\\n[1] Naeem, Muhammad Ferjad, et al. \\\"Reliable fidelity and diversity metrics for generative models.\\\" *ICML* (2020).\"}",
"{\"summary\": \"The paper proposes to apply consistency distillation (CD) on previous DDIB, achieving a one-step generative model for unpaired image-to-image translation. The authors manage to extend the CD theory, which is applicable to two arbitrary distributions. The novel distribution matching and adaptive weighting techniques further stabilize and facilitate the training process. Both qualitative and quantitative experiments confirm the efficacy of the pipeline and the outperformance.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides a versatile pipeline for unpaired image-to-image translation within only one step, which outperforms most previous methods even with large NFEs.\", \"The theory part is clear and intuitive, and the toy data showcases the instability of vanilla IBCD clearly.\", \"The experimental results are convincing and impressive, demonstrating directly the outperformance.\", \"The novel adaptive weighting is interesting and effective, encouraging further study in diffusion model distillation with insight.\"], \"weaknesses\": [\"CD highly bases on PF-ODE, i.e., it needs to follow the score function trajectory. In Eq. (6), two PF-ODEs starting from different domains are connected together at $\\\\sigma_{max}$, how to guarantee the smoothness of score function (i.e., gradient) at this point (since one directly uses noisy $x_a$ to solver attached to domain B)? If not smooth, how will the error be like? The authors may provide analysis here similar to original CD paper.\", \"In L261, the authors claim one of the challenge is to employ only local consistency. However, CTM [1] refers to local consistency as the case when applying PF-ODE solver with extremely smaller step. On the contrary, when using two adjacent timesteps, CTM names it global consistency, similar to original CD. So in the paper, this should also be called a global consistency. I can hardly understand why such strategy is a challenge, given that most distillation works use such a loss.\", \"[1] Learning Probability Flow ODE Trajectory of Diffusion. Kim et al., ICLR 2024.\", \"The authors state that vanilla IBCD faces mean prediction phenomenon, but provides no convincing analysis on it. Original CD seems not to face such challenge. Does it come from the mismatch of two PF-ODEs? The visualization in Fig. 3(a) fails to convince me. The synthesized samples are not at the mean of domain B. Besides, I cannot see the efficacy of DMCD and cycle loss.\", \"The ablation study is somewhat confusing. Why vanilla IBCD is only a point rather than a broken line like the others? From Tab. 3 and Fig. 6, it seems that adaptive weighting may harm the performance, which is not consistent with conclusion in toy data. Conversely, DMCD is helpful in real data but fails in toy data. The authors may need further clarification.\"], \"questions\": [\"The authors propose to use one generator for two domains, which may be unreasonable or hard to achieve in practice. I think the whole pipeline is compatible with two independent pre-trained DMs, i.e., one ADM on LSUN Cat and one ADM on ImageNet with some specific class.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes a framework called Implicit Bridge Consistency Distillation (IBCD) for unpaired image to image translation. IBCD connects PF-ODE trajectories from any distribution to another one by extending consistency distillation with a diffusion implicit bridge model. It introduces Distribution Matching for Consistency Distillation (DMCD) and distillation-difficulty adaptive weighting method to deal with the distillation errors and mean prediction problems from the consistency distillation. Experiments on translation benchmarks demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The paper is well-written and it clearly explains the proposed method.\\n2) The visualizations of component\\u2019s cumulative contributions on the toy dataset in Fig. (3) help appreciate the role of each part.\\n3) Experiments on both toy and highdimensional datasets demonstrate the effectiveness of IBCD.\", \"weaknesses\": \"1) Missing comparison of the results for bidirectional translation.\\n2) Missing comparison of computation cost with the existing methods to show the efficiency of the proposed method.\\n3) The results in Tab. 3 show the model which added DMCD loss, cycle loss and adaptive DMCD degrades the performance in terms of PSNR and SSIM compared to the method using IBCD only.\\n4)\\u00a0The zero in the first row of Eq. (6) might be \\\\chi_A\\\\cap\\\\chi_B.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer UgkH,\\n\\nAs the discussion deadline is getting closer, we would like to kindly remind the reviewer that we are waiting for your valuable feedback to our responses. \\n\\nPlease note that we've addressed all your feedback by **clarifying our contributions**, **ablation studies**, and adding **diversity evaluation** and **failure cases**.\\n\\nSo we would greatly appreciate it if you could review our revised manuscript and provide your feedback.\\n\\nThank you for your time and consideration.\\n\\nBest,\\nAuthors\"}",
"{\"comment\": \"**W1. In Eq. (6), two PF-ODEs starting from different domains are connected together at $\\\\sigma_{max}$, how to guarantee the smoothness of score function (i.e., gradient) at this point (since one directly uses noisy to solver attached to domain B)? If not smooth, how will the error be like?**\\n\\nWe thank the reviewer for the insightful question. While IBCD introduces a continuous but non-differentiable point at the center of the PF-ODE trajectory ($t=0$), we argue that Theorem 1 of the original CD paper (Appendix A.2) remains applicable.\\n\\nFirst, we consider the Lipschitz condition for $f_\\\\theta (x_t,t)$. Since the primary difference between IBCD and CD lies in the $t$ direction, we focus on this aspect. The output of $f_\\\\theta$, which predicts the clean target domain image, remains constant along a given PF-ODE trajectory, regardless of $t$. Therefore, the Lipschitz condition is not affected by the non-differentiable point while the trajectory is continuous. Since the change in the $x_t$ direction is not different from CD, we can still use their Lipschitz assumption.\\n\\nSecond, we examine the local truncation error of the ODE solver. The non-differentiable point is precisely captured by our discretization scheme. The gradient used at this point is a combination of gradients from both sides of the trajectory, ensuring stable numerical integration. For example, consider an Euler solver. For the forward direction (domain A to B):\\n- The interval $i=[-1,0]$ uses the gradient at $-1$ (from domain A).\\n- The interval $i=[0,1]$ uses the gradient at $0$ (from domain B).\\n\\nOn the other hand, for the Backward direction (domain B to A):\\n- The interval $i=[0,1]$ uses the gradient at $1$ (from domain B).\\n- The interval $i=[-1,0]$ uses the gradient at $0$ (from domain A).\\n\\nConsequently, due to the properties of the consistency function and our careful handling of the non-differentiable point, the error bound of the consistency function remains $O((\\\\Delta t)^p)$ which is same as the original CD.\\n\\n**W2. In L261, the authors claim one of the challenge is to employ only local consistency. However, this should be called a global consistency following [1]. I can hardly understand why such strategy is a challenge, given that most distillation works use such a loss.**\\n\\nIn response to the reviewer's comment, we would like to clarify that the consistency loss employed in IBCD is indeed a form of **local consistency (vanilla CD)**, as defined in CTM. As extensively discussed in Section 5.2 and Appendix C.3 of CTM, this local consistency has key limitations, primarily its recursive nature.\\n\\nWe argue that this recursive nature of local consistency increases distillation errors in IBCD, as we will explain later. However, we have also revised our analysis of the sources of distillation errors in vanilla IBCD to adopt a more nuanced perspective:\\n\\n1.\\t**Model Capacity Constraints**: Unlike other methods, our student model is tasked with learning a bidirectional translation function using a single model, inherently limiting its capacity. Previous work [2] has demonstrated that bidirectional consistency models can underperform compared to vanilla CM.\\n\\n2.\\t**Combination of Different ODEs**: Unlike the Teacher model, where the two ODEs share a timestep and are differentiated solely by their initial conditions, the two ODEs in IBCD are entirely separate and interconnected. This can similarly impact model capacity and learning complexity.\\n\\n3. 
**Recursive Nature of Local Consistency Loss**: As the local consistency loss is applied recursively, local fitting errors accumulate sequentially from the boundary condition to the trajectory's end. Consequently, the translation process, involving a longer path, incurs a larger accumulated distillation error compared to the generation process (updated Figure 8).\\n\\n**W3. The authors state that vanilla IBCD faces mean prediction phenomenon, but provides no convincing analysis on it. Original CD seems not to face such challenge.**\\n\\nBased on your review, we acknowledge the lack of theoretical support for the mean prediction phenomenon. While we initially introduced this phenomenon to illustrate an example of distillation error in vanilla IBCD, we have removed this content from the main text. Instead, we now focus on why distillation error is an issue in vanilla IBCD (W2 and section 3.2). \\n\\nOur auxiliary losses were designed to address this fundamental distillation error and provide flexibility in balancing reality-faithfulness, not to specifically target the mean prediction phenomenon. Therefore, our core argument remains unaffected by this change. \\n\\n**W4. The ablation study is somewhat confusing. Why vanilla IBCD is only a point rather than a broken line like the others?**\\n\\nPer your suggestion, we've modified the vanilla IBCD model to also incorporate a trade-off curve format. The data point prior to the update signifies the initial state of the model before additional training with the auxiliary loss function.\"}",
"{\"title\": \"General Response\", \"comment\": [\"We thank the reviewers for their insightful comments and suggestions. We are pleased that the reviewers found our paper well-written, theoretically grounded, and clear, particularly in the validation of the toy experiment. We have carefully addressed all the points raised and made corresponding revisions to the manuscript, with corrections and additions highlighted in blue.\", \"**Key changes:**\", \"Added density-coverage metric to main experiement for a more comprehensive evaluation (Table 2)\", \"Update ablation studies (Table 3, Figure 6).\", \"Clarified the necessity of the auxiliary loss in vanilla IBCD (Section 3.2).\", \"Added quantitative comparison of real-world inference speeds (Appendix D.4, Table 5).\", \"Included quantitative comparison of bidirectional tasks (Appendix D.6, Table 6, Figures 12, 13).\", \"Added more failure cases (Figure 11).\", \"We will respond to each of the reviewer's comments individually, providing detailed responses to the reviewer's concerns. We hope that the revised manuscript and responses address the reviewer's concerns and provide a valuable contribution to the field. If you have any further questions, please feel free to discuss them at any time.\"]}",
"{\"comment\": \"**W5. I cannot see the efficacy of DMCD and cycle loss in toy data.**\\n\\nAll components contribute to the effectiveness of the toy experiment. When DMCD is applied alone, it reduces the number of samples translated to low-density regions, as evidenced by the decreased density between the spirals. However, this comes at the cost of reduced modal coverage, leading to thinner spiral arms. Introducing cycle loss helps to mitigate this issue by maintaining the reduction in low-density translations while expanding the modal coverage, resulting in wider spiral arms and lower density between spirals. The recovery in modal coverage with cycle loss can be attributed to the difficulty of mapping samples back to the source domain when they are translated to the same point (reduced modal coverage) in the target domain. Finally, adaptive DMCD further enhances the ability of DMCD to reduce low-density translations. In addition to mitigating distillation error, auxiliary losses provide a mechanism for flexible control over the trade-off between reality and faithfulness by allowing for the adjustment of weights like EGSDE (Figure 9).\\n\\n\\n**W6. From Tab. 3 and Fig. 6, it seems that adaptive weighting may harm the performance, which is not consistent with conclusion in toy data.**\\n\\nWe agree the potential negative impact of adaptive DMCD on the low reality-high faithfulness region. That said, the adaptive DMCD is usually a usful option since we can use adaptive DMCD when prioritizing FID and omitting it when SSIM is the primary concern.\\n\\nWhile a definitive analysis is still needed, we hypothesize that the interaction between cycle loss and adaptive DMCD might be the root cause. Cycle loss, by transforming teacher's ODE trajectories trained with consistency loss, could potentially disrupt the semantic interpretation of consistency loss magnitudes, which are crucial for adaptive DMCD weightings. High cycle loss might lead to trajectories that deviate significantly from the teacher's, potentially undermining the assumption that consistency loss magnitude correlates with trajectory learning difficulty. This speculative explanation suggests a promising direction for future research: adaptively applying cycle loss to mitigate conflicts between auxiliary losses.\\n\\n\\n**Q1. The authors propose to use one generator for two domains, which may be unreasonable or hard to achieve in practice. I think the whole pipeline is compatible with two independent pre-trained DMs.**\\n\\nAs you pointed out, our pipeline can be compatible with two independent teacher models with a simple modification. In this case, it can be used as is even if the two domains are not trained in a single model, which expands the scope of our framework. Thanks for the great comment.\\n\\n\\n**References:**\\n\\n[1] Kim, Dongjun, et al. \\\"Consistency trajectory models: Learning probability flow ode trajectory of diffusion.\\\" *ICRL* (2024).\\n\\n[2] Li, Liangchen, and Jiajun He. \\\"Bidirectional Consistency Models.\\\" arXiv preprint (2024).\"}",
"{\"comment\": \"Dear reviewer etdT,\\n\\nWe've carefully considered your valuable feedback and have made the following revisions to our manuscript and responses to your questions:\\n\\n1. Clarified the error induced by non-differentiable ODE trajectories.\\n2. Clarified the difficulty in vanilla IBCD and removed the term \\\"mean prediction phenomenon.\\\"\\n3. Clarified the ablation studies.\\n\\nAs the discussion period deadline is approach fast, we would appreciate it if you could provide your feedback whether our revision and rebuttal have fully addressed your concerns.\\n\\nWe appreciate your time and consideration.\\n\\nBest regards, Authors\"}"
]
} |
1Y5hMMuCFU | Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch | [
"Yuyang Ding",
"Xinyu Shi",
"xiaobo liang",
"Juntao Li",
"Qiaoming Zhu",
"Min Zhang"
] | The availability of high-quality data is one of the most important factors in improving the reasoning capability of LLMs.
Existing works have demonstrated the effectiveness of creating more instruction data from seed questions or knowledge bases.
Recent research indicates that continually scaling up data synthesis from strong models (e.g., GPT-4) can further elicit reasoning performance.
Though promising, the open-sourced community still lacks high-quality data at scale and scalable data synthesis methods with affordable costs.
To address this, we introduce ScaleQuest, a scalable and novel data synthesis method that utilizes ``small-size'' (e.g., 7B) open-source models to generate questions from scratch without the need for seed data with complex augmentation constraints.
With the efficient ScaleQuest, we automatically constructed a mathematical reasoning dataset consisting of 1 million problem-solution pairs, which are more effective than existing open-sourced datasets.
It can universally increase the performance of mainstream open-source models (i.e., Mistral, Llama3, DeepSeekMath, and Qwen2-Math) by achieving 29.2\% to 46.4\% gains on MATH.
Notably, simply fine-tuning the Qwen2-Math-7B-Base model with our dataset can even surpass Qwen2-Math-7B-Instruct, a strong and well-aligned model on closed-source data, and proprietary models such as GPT-4-Turbo and Claude-3.5 Sonnet. | [
"large language models",
"mathematical reasoning",
"data synthesis"
] | Reject | https://openreview.net/pdf?id=1Y5hMMuCFU | https://openreview.net/forum?id=1Y5hMMuCFU | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"y3QzpriYoH",
"w9fa12UpAz",
"vZ16zmqSRH",
"nfKvjTzZzw",
"msrm16bfu3",
"jKOo0wbTjR",
"fLxJMldRvy",
"c5EDmAZFfq",
"WpKct7FDkE",
"Uf2nznh9NT",
"RaqYVEVIBI",
"Nwa8rHX4YI",
"NTPhkxPF3o",
"MohtuJfiWA",
"LigW51oPVl",
"ISFUJ1bRAB",
"I9ZEk2B9U9",
"El1DPCfAkW",
"Dhol5jBx7m",
"8lv7uvP1Y4",
"5MZTIQmy7Y",
"3Dqr5OE3jc",
"2WeuK02NeJ",
"1pNxG3DYWw"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment"
],
"note_created": [
1732287673218,
1732851215517,
1732288308277,
1730665026946,
1732287227645,
1730590672722,
1732544698754,
1732288506466,
1732333578423,
1732287858622,
1732769032188,
1732543938972,
1732490860989,
1732288657125,
1730706600918,
1729967083209,
1732287972458,
1737524111365,
1732544309887,
1732287572796,
1732288204930,
1732685673472,
1734610143178,
1732871826078
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_FQiU"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_KJ61"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_CoxX"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_KJ61"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_CoxX"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_FQiU"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_HR4m"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11215/Reviewer_FQiU"
],
[
"ICLR.cc/2025/Conference/Submission11215/Area_Chair_iwSp"
],
[
"ICLR.cc/2025/Conference/Submission11215/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer FQiU (Question)\", \"comment\": \"> To Question 1: The authors should compare different base models in Figure 5 and Table 2.\\n\\nPlease see our response to weakness 3.\\n\\n---\\n\\n> To Question 2: The experimental setup in the experimental module should be clearly presented, etc\\n\\nWe apologize for the confusion caused by our unclear descriptions. We have carefully reviewed and clarified the setup for each experiment in our revised version.\\n\\n- For Table 2, we ensured that the response generation process was consistent.\\n- For Figure 5, we added the explanation of the solvable ratio and difficulty score.\\n\\n---\\n\\n> To Question 3: The authors might discuss the effects of optimizing different question data volumes, etc\\n\\nWe have further explored the impact of varying training data volumes on QPO. Using Qwen2-Math-7B-Ins as an example, we conducted experiments with 5K, 10K, 15K, 20K, and 40K samples. The results are presented below, and we discuss them in detail in Appendix C of our revised version.\\n\\n| Train Data | Solvable Ratio | Difficulty Score |\\n| ---------- | -------------- | ---------------- |\\n| 0K | 75.8 | 49.6 |\\n| 5K | 81.5 | 50.8 |\\n| 10K | 83.8 | 50.9 |\\n| 15K | 84.5 | 50.7 |\\n| 20K | 84.9 | 50.9 |\\n| 40K | 85.2 | 51.0 |\\n\\n---\\n\\n> To Question 4: The author should probably compare the generated questions with the questions in the test set (n-grams or other methods) to prevent potential data leakage.\\n\\nWe appreciate the reviewer's concern about potential data leakage. To address this, we have conducted an n-gram similarity analysis between the generated questions and all test sets from both our dataset and other baseline datasets. Based on prior empirical analysis [1, 2], we set n=13 to prevent spurious collisions and calculated how much the test sets overlap with training data to assess data contamination. The table below illustrates the clean ratio across our dataset and baseline datasets, defined as the percentage of test samples containing no n-gram matches with the training set. The experiments and analysis have been updated in Appendix A of our revised version.\\n\\n| Train Data | GSM8K | MATH | College Math | Olympiad Bench | Average |\\n| ---------- | ------ | ------ | ------- | -------- | ------- |\\n| MetaMath | 99.77% | 92.20% | 100.00% | 99.70% | 97.92% |\\n| NuminaMath | 99.77% | 89.76% | 99.86% | 86.81% | 94.05% |\\n| DartMath | 99.77% | 91.46% | 100.00% | 99.56% | 97.70% |\\n| MMIQC | 99.77% | 88.04% | 98.90% | 97.93% | 96.16% |\\n| SacleQuest | 99.85% | 92.82% | 99.75% | 97.19% | 97.40% |\\n\\nThe results demonstrate that our dataset achieves a relatively high level of data cleanliness compared to other datasets, suggesting that our method generates novel questions instead of memorizing existing ones.\\n\\n[1] https://arxiv.org/abs/2005.14165\\n\\n[2] https://arxiv.org/abs/2109.01652\"}",
"{\"title\": \"Official comment\", \"comment\": \"Thanks for your Clarification\\n\\n\\n\\\"Training a Question Generator\\\" is not the firstly proposed by this paper. For example, [1] synthesizes 6 million math problems from their trained question generator. Also, I think training a question generator is not a very novel idea and only requires simple SFT techniques.\\n\\n\\n[1] JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models\"}",
"{\"title\": \"Response to Reviewer CoxX (Question)\", \"comment\": \"> To Question 1: How did the authors select the base difficulty filtering model for fine-tuning (Lines 222-239) and the reward filtering model (Lines 251-252)? etc\\n\\n- Selection of difficulty filtering model: The selection was inspired by DART-Math, which uses the accuracy of DeepSeek-7B-RL on a given question as a measure of its difficulty. We experimented with different models for training (DSMath-7B-Base and DSMath-7B-RL) and found that the results were similar.\\n- Selection of reward filtering model: This choice was primarily guided by the model\\u2019s performance on the reasoning subset of the Reward Bench.\\n\\nThank you for your suggestion! We have updated the discussion in the revised version.\\n\\n---\\n\\n> To Question 2: In Table 1, the term \\\"Synthesis Model\\\" in the header needs clarification, etc\\n\\nSorry for the confusion. Allow us to clarify:\\n\\n- DART-Math only includes response generation using DSMath-7B-RL. Other baselines use different synthesis models for both question synthesis and response generation, such as GPT3.5, GPT-4, and GPT-4o.\\n- For our approach, DSMath-7B-QGen and Qwen2-Math-7B-QGen are utilized for question synthesis, with Qwen2-Math-7B-Ins used for response generation. \\n\\nIf multiple models are used, only the most recently released one is marked. Additional details regarding the complete synthesis models for these datasets are provided in Figure 6. This clarification has been updated in the caption of our revised version.\\n\\n---\\n\\n> To Question 3: The left bar chart in Figure 5 has a confusing y-axis, etc\\n\\nThank you for your suggestion. We have updated the chart with clearer explanations for the solvable ratio and difficulty score. The difficulty score is indeed calculated based on the method described in Lines 377-406, and we have clarified this in the revised version.\\n\\n---\\n\\n> To Question 4: a rigorous human evaluation on a random subset would better demonstrate ScaleQuest\\u2019s quality, etc\\n\\nThank you for pointing this out. Human evaluation is indeed a more direct approach to demonstrate the quality of ScaleQuest.\\n\\nWe sampled 100 examples each from NuminaMath, and ScaleQuest and evaluated them based on **clarity**, **reasonableness**, and **real-world relevance**, with scores ranging from [1, 5]. (Please understand that due to the complexity of mathematical tasks, we limited the sample size to 40.)\\n\\n- In terms of clarity and reasonableness, our synthetic data surpasses NuminaMath but still falls short of the high-quality, real-world datasets like the training sets of GSM8K and MATH.\\n- Regarding real-world relevance, GSM8K leans toward practical, real-life scenarios, while MATH focuses more on theoretical mathematical derivations. Our generated data can be seen as a balance between the two.\\n\\nMore details have been updated in Appendix C.\\n\\n| | clarity | reasonableness | Real-world relevance |\\n| ---------- | ------- | -------------- | -------------------- |\\n| GSM8K | 4.4 | 4.5 | 3.9 |\\n| MATH | 4.1 | 4.3 | 2.4 |\\n| NuminaMath | 3.8 | 4.0 | 2.4 |\\n| ScaleQuest | 3.9 | 4.0 | 2.8 |\"}",
"{\"summary\": \"This paper proposes a scalable data synthesis method, ScaleQuest, for math reasoning. The augmented math datasets can enhance the model performance of mainstream open-source models such as Mistral, Llama3, DeepSeekMath, and Qwen2-Math. After finetuning the proposed dataset, the small open-source models can even outperform closed-source models such as GPT-4-Turno and Claude-3.5 Sonnet\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper provides a cost-effective data synthesis method for math reasoning problems.\\n2. The synthetic dataset can boost the performance of multiple open-source models in both in-domain and out-of-domain evaluation.\", \"weaknesses\": \"1. The main weakness of this paper is, that the proposed data synthesis pipeline is too complex and may be domain-specific. It includes the training in question fine-tuning, question preference optimization, the inference for solvability and difficulty check, reward scoring, etc. Although the API and training cost is not as expensive as GPT-4, this method is more time-consuming and requires extra effort to adapt to other domains.\\n2. The proposed data synthesis method is only evaluated in the math domain. It is unsure whether this method can be easily adapted to other domains such as code or logical reasoning. Specifically, can the question finetuning and question preference optimization trained on the math domain be directly used for other domains, or the extra finetuning for each domain and each stage is needed? \\n3. The experimental results are not easy to interpret: \\n(i) For the baselines with different synthetic datasets, are they finetuned on the same scale of training examples? \\n(ii) What does the Percentage and Accuracy in Figure 5 mean? Where is the legend of the left plot of Figure 5? \\n(iii) What does the question quality in Table 2 refer to? \\n4. There are many components in the data synthesis pipeline, but the impact of each component is not clear. For example, what if removing the question preference optimization and directly using the solvability filtering and difficulty sampling? This is different from the ablation study, which compares the performance w/ and w/o reward filtering while keeping all other components the same.\", \"questions\": \"There are plenty of LLMs used in the data synthesis pipeline: DeepSeekMath- 7B-RL , Qwen2-Math-7B-Instruct, GPT-4o-mini, GPT-4o, DeepseekMath-7B-Base, InternLM2-7B-Reward. Can you provide a Table for all the settings? Is there any specific reason to select different LLMs for different stages?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"General Response\", \"comment\": \"We sincerely thank all reviewers for the thoughtful and constructive feedback, as well as the time and effort in reviewing our work. The insights have been invaluable in helping us refine and improve our work.\\n\\nIn response to the feedback, we have submitted a revised version of the manuscript with the following major updates:\\n\\n- **Extension to another reasoning task:** We validated our approach on an additional reasoning task, *code reasoning*, as suggested by Reviewers KJ61 and CoxX. Our dataset outperformed the popular open-source dataset CodeFeedback, further demonstrating the effectiveness of our method. Details can be found in Appendix B.\\n- **Enhanced evaluations and analysis to validate effectiveness:** We conducted additional ablation studies and analyses, including **fair comparison experiments (same volume of training data)**, results on more base models, the impact of different training data volumes on QPO, additional results on OOD benchmarks, and human evaluations of the generated dataset. Further details are provided in Appendix C.\\n- **Corrections and clarifications:** We addressed typos and provided clearer explanations of the experimental setup to enhance understanding and reproducibility.\\n\\nWe are grateful for these insights, which have significantly contributed to improving the quality of our work.\"}",
"{\"summary\": \"The paper presents ScaleQuest, a scalable and cost-effective data synthesis framework designed to enhance the mathematical problem-solving capabilities of large language models (LLMs). Motivated by the need for high-quality, large-scale data, the authors propose a two-stage synthesis process. Specifically, ScaleQuest employs Question Fine-Tuning (QFT) to activate question-generation (QG) capabilities in small base models and Question Preference Optimization (QPO) to improve question solvability and difficulty. This is followed by filtering for language clarity, difficulty, and solvability, as well as reward-based response selection to ensure high-quality outputs. Experiments demonstrate that models fine-tuned with the ScaleQuest dataset outperform several baselines on benchmarks, achieving substantial improvements in accuracy across in-domain and out-of-domain mathematical reasoning tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. ScaleQuest targets data synthesis for instruction tuning, focusing on affordable low-cost methods. This approach demonstrates significant cost savings (Section 3.4), making large-scale data creation more accessible for open-source communities.\\n\\n2. The study includes thorough experimentation with multiple baselines, assessing both question and response quality across a total of four mathematical problem-solving benchmarks, thereby increasing the credibility of ScaleQuest.\\n\\n3. The paper is well-structured and quite easy to follow, with sufficient implementation details to enhance reproducibility.\", \"weaknesses\": \"1. As claimed by the authors in Lines 17-20 and 76-80, the main contribution of the paper is the scalable synthesis method, ScaleQuest. However, the method heavily depends on domain-specific fine-tuning and specialized models, which raises questions about its generalizability and applicability to domains beyond mathematical reasoning. For instance, the authors use Question Fine-Tuning (QFT) and Question Preference Optimization (QPO) to optimize the question generation process within the target domain of mathematical reasoning. Furthermore, the method involves components like solvability filtering, difficulty sampling, and reward filtering, each relying on different models and a specialized fine-tuned difficulty scorer, which appear tailored to mathematical data construction. This reliance on fine-tuned, domain-specific models, while effective in the tested domain, makes it challenging to adapt ScaleQuest to broader applications, potentially limiting its utility as a general-purpose data synthesis method.\\n\\n2. Additionally, the paper appears to make some overclaims regarding its scope and efficiency. While the title suggests an enhancement of \\\"reasoning capability,\\\" the paper narrowly addresses mathematical reasoning tasks, with little consideration given to other reasoning types, such as causal, commonsense, or logical reasoning. The claim of using \\u201csmall-size\\u201d models (Lines 18-19) is also somewhat misleading. Specifically, the QPO stage (Lines 199-202) requires a larger model, GPT-4o-mini, to achieve better preference-based filtering, suggesting that smaller models alone may not fully support the quality goals of ScaleQuest. 
The ablation results (Figure 5) further highlight the critical role of QPO, reinforcing the notion that the trade-off between model size and final data quality is not fully acknowledged, which impacts the efficiency claims of the method.\\n\\n3. Lastly, despite the authors\\u2019 assertions that ScaleQuest-generated data significantly enhances performance across various benchmarks, the observed improvements are marginal. For instance, Table 1 shows only a slight average increase from 62.7 to 62.9 when comparing Qwen2-Math-7B-ScaleQuest to its baseline Qwen2-Math-7B-Instruct, even with a decrease in performance on the CollegeMath benchmark. These limited gains suggest that the effectiveness of ScaleQuest\\u2019s synthesized data may not justify its complexity. Consequently, these modest gains raise concerns about the practical value and impact of the ScaleQuest approach.\", \"questions\": \"1. How did the authors select the base difficulty filtering model for fine-tuning (Lines 222-239) and the reward filtering model (Lines 251-252)? Considering that filtering significantly impacts final data quality (Figure 5), further discussion of criteria for model selection, along with any experimental comparisons, would enhance clarity on whether these models represent optimal choices.\\n\\n2. In Table 1, the term \\u201cSynthesis Model\\u201d in the header needs clarification. Does it refer to the model used for both question and response generation, or only response generation? This ambiguity is notable, especially as fine-tuned models such as Deepseek-QGen and Qwen2-Math-QGen are absent from the table. \\n\\n3. The left bar chart in Figure 5 has a confusing y-axis. Does the percentage indicate solvable/non-solvable or easy/difficult ratios? If it reflects these ratios, how does this relate to the five difficulty levels introduced in Lines 377-406? Detailing this connection would make the difficulty and solvability metrics clearer.\\n\\n4. Lastly, while evaluating synthesized data via difficulty distribution and solvability is helpful, a rigorous human evaluation on a random subset would better demonstrate ScaleQuest\\u2019s quality. Including human assessments of clarity, coherence, and real-world relevance could provide a nuanced verification of the synthesized data's effectiveness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Continued (Response to Concern 2)\", \"comment\": \"> To concern 2: The proposed pipeline is too complex with unnecessary complexity in the choice of LLMs, etc\\n\\nThank you for your feedback. We acknowledge and apologize for the confusion caused by directly presenting a complex setup. The approach we initially presented represents a culmination of all our insights, which were systematically validated through the subsequent ablation studies.\\n\\nTo address this, we have supplemented our revised version with a simpler setup to eliminate potential misunderstandings for readers. We used Qwen2-Math-7B-Ins for training question generators, constructing optimization data for QPO, performing solvability & difficulty filtering, as well as for response generation. For reward filtering, InternLM-7B-Reward remained unchanged. The results, as shown below (ScaleQuest-Simple), indicate that our approach continues to demonstrate superior performance compared to existing datasets. The corresponding results and analysis have been included in Appendix C of the revised version.\\n\\n| | Samples | GSM8K | MATH | College Math | Olympiad Bench | Average |\\n| -------------------------------- | ------- | -------- | -------- | ------------ | -------------- | -------- |\\n| Qwen2-Math-7B-MetaMath | 395K | 84.3 | 48.6 | 40.5 | 15.6 | 47.3 |\\n| Qwen2-Math-7B-DART-Math | 400K | 88.6 | 58.2 | 45.2 | 22.8 | 53.7 |\\n| Qwen2-Math-NuminaMath | 400K | 82.0 | 65.8 | 44.9 | 29.2 | 55.5 |\\n| Qwen2-Math-ScaleQuest | 400K | **90.6** | **71.6** | **50.2** | **36.2** | **62.1** |\\n| **Qwen2-Math-ScaleQuest-simple** | 400K | 89.4 | 69.9 | 48.8 | 33.6 | 60.4 |\\n\\n---\\n\\nRegarding GPT-4o, it is solely used as a manual substitute to evaluate solvability and difficulty for demonstrating the effectiveness of question preference optimization, and it is not involved in the overall ScaleQuest process. Additionally, we have included human evaluations, detailed in Appendix C, to further validate the effectiveness of our approach.\\n\\n---\\n\\nAdditionally, we believe it is worthwhile to summarize these insights on model selection for domain adaptation in the revised version:\\n\\n- **Selection of base model for training question generator:** The self-synthesis generation paradigm heavily relies on the inherent knowledge of the problem-solving model itself [1]. Therefore, a domain-specific model is essential. For example, Qwen2-Math-Ins is suitable for mathematical reasoning, while Qwen2.5-Coder-Ins fits well for code reasoning. Furthermore, using multiple question generators often leads to more diverse and higher-quality questions (as discussed in section 3.3). \\n- **Selection of model for constructing optimization data:** Well-aligned, general-purpose models, such as Llama3.1-70B and GPT-4o-mini, tend to perform better than domain-specific models, as illustrated in Figure 4.\\n- **Selection of Response Generation Model & Reward Model:** These can be selected based on their performance on the corresponding benchmarks, like MATH for mathematical reasoning, BigCodeBench for code reasoning, and Reward Bench for the reward model. We will also discuss the impact of different reward models, and the final results will be updated before the end of the author response period.\\n\\nIn fact, we did not fully utilize the most advanced and optimal models available. 
For instance, Qwen2.5-Math-72B-Ins would be a better choice for training the question generator and serving as a response generator, while Qwen2.5-Math-72B-Reward is undoubtedly a superior option for reward filtering.\\n\\nMoreover, math and code are two representative and non-trivial areas of focus within reasoning tasks [2]. To adapt to other domains, modifications should involve: (1) domain-specific problem-solving models for QFT and response generation, and (2) an optimization prompt tailored to the domain. More fine-grained optimization designs in QPO and the use of domain-specialized reward models (like Qwen2.5-Math-72B-Reward for Math Domain) could further improve performance. **Overall, We believe that leveraging the key insights outlined above can facilitate a straightforward adaptation to other reasoning domains.**\\n\\nWe also believe that **the methodology and the experience in selecting models are always more critical than the chosen models themselves**. With the continuous advancements in the open-source community, we are confident that stronger models will undoubtedly produce even better datasets when applying our approach.\\n\\n\\n\\n[1] https://arxiv.org/abs/2406.08464\\n\\n[2] https://arxiv.org/abs/2404.07503\"}",
"{\"title\": \"Response to Reviewer HR4m (Weakness)\", \"comment\": \"We sincerely appreciate your valuable feedback on our work. Your insights have provided us with an opportunity to refine and improve our paper.\\n\\n> To Weakness 1: The main experiments in Table 1 are somehow not very fair.\\n\\nExactly, data volume plays a significant role in shaping model performance. Achieving complete fairness in the comparisons, however, is challenging due to certain practical constraints: some datasets, such as WizardMath, MathScale, and KPMath, are closed-source, which limited our access to an equivalent number of training samples.\\n\\nTo ensure fairness, we have taken the following steps:\\n\\n- For open-source datasets, we plotted the scaling curve in Figure 1 (Right), illustrating the effectiveness of our approach given the same volume of training data.\\n- We further provide evaluation results on the same number of training samples (400K) from the public dataset, i.e., MetaMath, MMIQC, DART-Math, and NuminaMath, based on the Qwen2-Math-7B model. More details are shown in Appendix C of our revised version.\\n\\n| | GSM8K | MATH | College Math | Olympiad Bench | Avg |\\n| ------------------------ | -------- | -------- | ------------ | -------------- | -------- |\\n| Qwen2-Math-7B-MetaMath | 84.3 | 48.6 | 40.5 | 15.6 | 44.5 |\\n| Qwen2-Math-7B-DART-Math | 88.6 | 58.2 | 45.2 | 22.8 | 52.0 |\\n| Qwen2-Math-7B-NuminaMath | 82.0 | 65.8 | 44.9 | 29.2 | 53.7 |\\n| Qwen2-Math-7B-ScaleQuest | **90.6** | **71.6** | **50.2** | **36.2** | **60.6** |\\n\\n---\\n\\n> To weakness 2: I am wondering if their performance is similar on OOD test sets like GSM-hard, etc\\n\\nWe evaluate results on two other OOD benchmarks, including GSM-hard, MathChat.\\n\\n| Model | GSM-hard | MathChat Follow-up QA | MathChat Error Correction | Avg |\\n| ------------------------ | -------- | -------------------------- | ------------------------- | ---- |\\n| Qwen2-Math-7B-Instruct | 68.3 | R1: 89.5 R2: 62.4 R3: 53.5 | 89.9 | 72.7 |\\n| Qwen2-Math-7B-ScaleQuest | 66.3 | R1: 89.7 R2: 61.7 R3: 53.5 | 91.1 | 72.5 |\\n\\nOur model achieves comparable results to Qwen-Math-7B-Ins, demonstrating its generalization capability. More details can be seen in Appendix C of our revised version.\\n\\n---\\n\\n> To weakness 3: why the authors choose models like Qwen2-Math-7B, etc\\n\\nCompared to GPT-4o-mini, Qwen2-Math-7B offers **lower costs** and **more stable and predictable generation times**, which we consider crucial in large-scale data synthesis.\\n\\nFor instance, using GPT-4o-mini to perform solvability checks on 2 million generated questions is estimated to cost around `$600`, and managing such a large volume of API requests introduces additional uncertainty in time consumption. In contrast, using Qwen2.5-Math-7B requires only 110 GPU hours, offering a fixed processing time and significantly reduced costs (~ `$142.7`).\"}",
"{\"comment\": \"Thanks for the authors' efforts in rebuttal. The revised manuscript addresses my concerns about the unclear descriptions of experimental results and ablation studies.\\n\\nHowever, after reading the response, I would like to maintain my score for the following reasons:\\n\\n1. *The adaptation capability of the proposed method to a new domain is cost-consuming and ineffective.* The new results on code generation do not show a significant improvement over the existing datasets. Moreover, the adaptation requires retraining two question generation models, which is costly. The question preference model is reused for code, but their effectiveness is questionable due to the large difference between the math and the code questions.\\n2. *The proposed pipeline is too complex with unnecessary complexity in the choice of LLMs.* From the table of LLMs' settings, I do not think it is necessary to use so many different kinds of LLMs. For example, why not use the same Qwen2-Math-7B-Ins for Train Question Generator, Solvability filtering, Difficulty filtering, and Response Generation? Why not use GPT-4o-mini for both generation and evaluation if GPT-4o is too expensive? This makes it unclear whether the improvement comes from the pipeline design or the specific choice of LLMs. It also makes it more difficult to apply the proposed pipeline to a new domain effectively. For example, how to choose the correct LLM for each stage when transferring to the code domain? Is the current limited improvement mainly because the selection of LLMs is not optimal?\"}",
"{\"title\": \"Response to Reviewer KJ61 (Weakness 1-3)\", \"comment\": \"> To Weakness 1: The main weakness of this paper is, that the proposed data synthesis pipeline is too complex and may be domain-specific, etc\\n\\nThank you for your feedback. We would like to provide more information about reasoning data synthesis:\\n\\n- **General-purpose methods struggle with reasoning tasks:** Most general-purpose data generation methods face significant limitations when applied to reasoning tasks, as mentioned in works like Magpie [1].\\n- **Reasoning data synthesis requires a more complex process to ensure high-quality data:** Unlike general tasks, reasoning tasks demand higher data quality, which necessitates more sophisticated pipelines like question rephrasing [1], sub-topic extraction[2, 3], verification [1], and quality assessment [3]. Compared to previous works, our approach is more straightforward, only containing QFT, QPO, and filtering process.\\n- **Efforts in other reasoning tasks:** We extended our method to the Code Reasoning task as a simple validation, and the results are shown in the figure below. By keeping the answer generation model and data volume identical, our dataset outperformed the currently popular code dataset, CodeFeedback-Filtered [4]. More details can be seen in Appendix B of the revised version.\\n\\n| Model | | HumanEval | MBPP | BigCodeBench | Avg |\\n| ---------------- | ---- | --------- | ---- | ------------ | ---- |\\n| CodeFeedback-Raw | 156K | 79.3 | 77.2 | 35.6 | 64.0 |\\n| CodeFeedback-Aug | 156k | 84.1 | 84.7 | 39.0 | 69.3 |\\n| ScaleQuest-Code | 156k | 86.6 | 83.1 | 40.0 | 69.9 |\\n\\n[1] https://arxiv.org/abs/2309.12284\\n\\n[2] https://arxiv.org/abs/2403.02333\\n\\n[3] https://arxiv.org/abs/2403.02884\\n\\n[4] https://arxiv.org/abs/2402.14658\\n\\n---\\n\\n> To Weakness 2: The proposed data synthesis method is only evaluated in the math domain.\\n\\nThank you for your valuable feedback. To address this, we made minor adjustments to the ScaleQuest method to adapt to code reasoning tasks (details in Appendix B). Our experiments demonstrate that the resulting dataset achieves higher quality compared to the open-source CodeFeedback dataset. The results are provided above (see response in Weakness 1), with additional details included in Appendix B.\\n\\n---\\n\\n> To Weakness 3.1: For the baselines with different synthetic datasets, are they finetuned on the same scale of training examples?\\n\\nWe apologize for the confusion. The results in Table 1 do not strictly control for identical training data volumes due to practical constraints (e.g., some datasets are not publicly available).\\n\\nTo ensure a fair comparison, we made the following efforts:\\n\\n- For open-source datasets, we plotted the scaling curve in Figure 1 (Right), which demonstrates the effectiveness of our approach with the same volume of training data.\\n- Additionally, we provide evaluation results using Qwen2-Math-7B fine-tuned on 400K training samples drawn from public datasets (MetaMath, DART-Math, and NuminaMath). 
The new results are updated in Appendix C.\\n\\n| | GSM8K | MATH | College Math | Olympiad Bench | Avg |\\n| ------------------------ | ----- | ---- | ------------ | -------------- | ---- |\\n| Qwen2-Math-7B-MetaMath | 84.3 | 48.6 | 40.5 | 15.6 | 44.5 |\\n| Qwen2-Math-7B-DART-Math | 88.6 | 58.2 | 45.2 | 22.8 | 52.0 |\\n| Qwen2-Math-7B-NuminaMath | 82.0 | 65.8 | 44.9 | 29.2 | 53.7 |\\n| Qwen2-Math-7B-ScaleQuest | 90.6 | 71.6 | 50.2 | 36.2 | 60.6 |\\n\\n---\\n\\n> To Weakness 3.2: What does the Percentage and Accuracy in Figure 5 mean? Where is the legend of the left plot of Figure 5?\\n\\nWe are sorry for the confusion, \\n\\n- **Percentage:** In the solvability subplot, it indicates the proportion of generated questions judged as solvable. In the difficulty subplot, it represents the average difficulty score of the generated questions.\\n- **Accuracy:** This metric evaluates the impact of the synthesized dataset on model performance, specifically the fine-tuned model\\u2019s accuracy on the test dataset. Detailed explanations can be found in Section 3.1 under *Evaluation and Metrics.*\\n\\nWe have clarified these definitions in the figure caption of our revised version.\\n\\n---\\n\\n> To Weakness 3.3: What does the question quality in Table 2 refer to?\\n\\nWe are sorry for the confusion. \\\"question quality\\\" in Table 2 refers to instruction tuning effectiveness. \\n\\nThe purpose of Table 2 is to compare the instruction tuning effectiveness of questions from different open-source datasets. To ensure a fair comparison, we kept the answer generation process entirely identical.\"}",
"{\"title\": \"Further Clarification Regarding Your Concern\", \"comment\": \"Thank you for your valuable feedback.\\n\\nIncreasing the amount of data can indeed lead to some improvement, as we discuss in Appendix C of our revised version. However, the extent of this improvement is limited. This experience also aligns with the that of DPO when applied to response optimization. We believe that 10k\\u201315k training samples are a reasonable and balanced choice.\\n\\n---\\n\\nThe main concern lies in the effectiveness of QPO.\\n\\nWe would like to clarify that QPO is not the core technical contribution of our work; rather, it constitutes only a small part of our overall method, and we have not overclaimed QPO as a core contribution. ScaleQuest is a comprehensive framework comprising multiple components, and we are pleased to see that the effectiveness of other sub-methods, such as QFT, solvability analysis, and reward filtering, has been recognized.\\n\\nIn addition to offering technical contributions, we believe that proposing promising directions to address significant problems and tasks is equally important for the acceptance of this conference.\", \"we_would_like_to_clarify_the_contributions_of_our_work_as_follows\": [\"**Overall Contribution**: To the best of our knowledge, we are the first to propose the concept of \\\"**Training a Question Generator**\\\" for reasoning tasks, which we consider a promising direction for future research. We introduced QFT (Question Fine-Tuning) and QPO (Question Preference Optimization), which correspond to traditional instruction and preference tuning. While the effectiveness of QPO is currently limited, we demonstrate that questions themselves can be optimized to improve solvability, difficulty, and instruction tuning effectiveness. This is a promising direction worth further exploration, with significant potential for improvement. We believe that a more refined design for QPO could lead to greater enhancements. However, developing such sophisticated algorithms is beyond the primary focus of this paper, and we leave the concentrated study of question preference tuning for future work.\", \"**Effectiveness**: Our approach effectively tackles the challenge of data generation for reasoning tasks, demonstrating significant improvements over existing math-specialized models, as illustrated in Figure 1. Specifically, our data delivers a 46.4% accuracy improvement for Mistral-7B, 43.2% for Llama3-8B, 31.1% for DSMath-7B, and 29.2% for Qwen2-Math. Overall, our framework is both cost-efficient and our data ranks among the highest-quality open-source datasets for instruction tuning.\", \"**Contribution to open-source community**: Previous works heavily rely on the strong instruction-following capabilities of closed-source models like GPT-4, with some failing to publicly release their datasets and code. In contrast, our framework is fully open-source, allowing for the generation of data with both high quality and diversity. Furthermore, we have adapted our method to code reasoning tasks, with promising results detailed in Appendix B.\", \"We hope the above clarifications will help you reassess our contribution with a fresh perspective.\"]}",
"{\"title\": \"Additional Experiments\", \"comment\": \"We have completed the inclusion of all four base models in Table 2, as referenced in Weakness 3 and Question 1. The results have been updated in Appendix C of our revised version.\", \"using_mistral_7b_as_the_base_model\": \"| Method | GSM8K | MATH | College Math | Olympiad Bench | Average |\\n| --------------------- | -------- | -------- | ------------ | -------------- | -------- |\\n| Mistral-7B-MetaMath | 77.0 | 34.1 | 18.6 | 8.6 | 34.6 |\\n| Mistral-7B-OrcaMath | 84.4 | 31.6 | 20.9 | 8.2 | 36.3 |\\n| Mistral-7B-NumiMath | 79.5 | 62.8 | 40.4 | **30.4** | 53.3 |\\n| Mistral-7B-ScaleQuest | **88.5** | **62.9** | **43.5** | 28.8 | **55.9** |\", \"using_llama3_8b_as_the_base_model\": \"| | GSM8K | MATH | College Math | Olympiad Bench | Average |\\n| -------------------- | -------- | -------- | ------------ | -------------- | -------- |\\n| Llama3-8B-MetaMath | 77.6 | 33.1 | 20.6 | 9.2 | 35.1 |\\n| Llama3-8B-OrcaMath | 83.2 | 32.6 | 19.4 | 8.6 | 36.0 |\\n| Llama3-8B-NumiMath | 79.1 | 62.9 | 39.3 | **25.4** | 51.7 |\\n| Llama3-8B-ScaleQuest | **87.9** | **64.4** | **42.8** | 25.3 | **55.1** |\", \"using_qwen2_math_7b_as_the_base_model\": \"| Method | GSM8K | MATH | College Math | Olympiad Bench | Average |\\n| ------------------------ | -------- | -------- | ------------ | -------------- | -------- |\\n| Qwen2-Math-7B-MetaMath | 88.5 | 68.5 | 47.1 | 33.0 | 59.3 |\\n| Qwen2-Math-7B-OrcaMath | 89.3 | 68.3 | 46.6 | 31.9 | 59.0 |\\n| Qwen2-Math-7B-NumiMath | 89.5 | 72.6 | 49.5 | 36.3 | 62.0 |\\n| Qwen2-Math-7B-ScaleQuest | **89.7** | **73.4** | **50.0** | **38.5** | **62.9** |\"}",
"{\"title\": \"Reviewer\\u2019s Comments and Score Update\", \"comment\": \"The reviewer appreciates the authors\\u2019 efforts, particularly the inclusion of additional experiments on code reasoning and human evaluation, which effectively address most of my concerns. Based on these improvements, I have raised my score from 3 to 6 to reflect the enhanced quality of the work. However, I am open to deferring to the consensus of the other reviewers regarding the final decision.\"}",
"{\"title\": \"Response to Reviewer HR4m (Question)\", \"comment\": \"> To Question 1: Typo: Filering -> Filtering in line 215\\n\\nThank you for your feedback. We have carefully reviewed and corrected the typo in our manuscript.\\n\\n---\\n\\n> To Question 2: In Figure 5, it seems that QPO is less effective, etc\\n\\nWe would like to correct the misunderstanding that the purpose of QPO is solely to enhance the final SFT model performance. A key objective is actually to improve the efficiency of data generation. As shown in Figure 5, after applying QPO, the solvability of the generated questions increased by 8.2%, a significant improvement, indicating that the sample utilization rate for solvable questions is correspondingly higher.\\n\\nWhile the effect may appear minimal due to subsequent solvability filtering, our detailed analysis shows that 28.8% of unsolvable questions were filtered out in the baseline setting, whereas after QPO, only 19.4% were deemed unsolvable. This represents a 9.4% reduction in computational overhead.\\n\\n---\\n\\n> To Question 3: about the effectiveness of Solvability Filtering and Difficulty sampling, etc\\n\\nWe agree that some of the generated questions are not perfect in terms of solvability, which can be attributed to the model\\u2019s tendency\\u2014driven by hallucination\\u2014to attempt solving problems rather than assessing their solvability.\\n\\nAs for difficulty sampling, inspired by DART-Math\\u2019s insight that challenging questions can drive more effective learning, we empirically fit specific difficulty distributions by observing the patterns in difficulty distribution.\"}",
"{\"summary\": \"The paper introduces a novel framework for generating high-quality reasoning datasets using smaller open-source models. The primary focus is on addressing the challenges of synthesizing high-quality data at scale with affordable costs.\", \"key_contributions_of_the_paper_include\": [\"The authors present a scalable data synthesis method that enables the generation of 1 million question-answer pairs without relying on extensive seed data or complex augmentation techniques.\", \"The framework incorporates a two-stage process consisting of Question Fine-Tuning (QFT) and Question Preference Optimization (QPO), which enhances the question generation capabilities of the models.\", \"The paper demonstrates that models fine-tuned with the ScaleQuest dataset achieve significant performance gains compared to baseline models.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This article focuses on synthesizing mathematical problems using open-source large language models, which is an important topic. The fine-tuning and filtering techniques proposed by the authors demonstrate some effectiveness.\", \"The article presents a thorough and detailed set of experiments.\"], \"weaknesses\": [\"The proposed Question Preference Optimization (QPO) appears to be less effective; as shown in Figure 5, the difference between QPO and QFT is minimal, raising questions about the validity of QPO.\", \"This paper attempts to extract training data from models, similar to the approach of MAGPIE. Therefore, the authors should conduct a more fair and detailed comparison between Question Fine-Tuning (QFT) and direct prompting methods. In Figure 5, the authors generated 1 million question-response pairs using MAGPIE with Qwen2-Math-7B-Instruct as the \\\"raw data\\\" setting. However, the other settings filtered 2 million( from DeepSeekMath-QGen and Qwen2-Math-QGen) questions down to 1 million and applied a reward model to filter the responses. Consequently, it is difficult to determine whether QFT is more effective than the MAGPIE method or if the filtration of questions and responses is more effective.\", \"The ablation experiments are insufficient. The authors conducted experiments only on Llama3-8B, rather than comparing all four base models as presented in the main table.\", \"The authors suggest that the data generation method proposed in this paper can produce diverse and high-quality questions at a lower cost. However, with advancements in open-source models, previous sample-driven and knowledge-driven question synthesis models can also be replaced with open-source models. Moreover, Qwen2-Math, as a response synthesizer, demonstrates superior mathematical capabilities compared to earlier versions of GPT-4. Therefore, it is difficult to assert that the data synthesis approach presented in this paper is superior to other methods in cost.\"], \"questions\": [\"The authors should compare different base models in Figure 5 and Table 2.\", \"The experimental setup in the experimental module should be clearly presented; for instance, in Table 2, did the responses corresponding to questions from other datasets involve generating five responses and filtering down to one based on the reward model, or was only one response generated?\", \"The authors might discuss the effects of optimizing different question data volumes during QPO. 
Additionally, since the authors note that optimizing for both solvability and difficulty simultaneously in QPO is challenging, are there corresponding experimental results to support this?\", \"The author should probably compare the generated questions with the questions in the test set (n-grams or other methods) to prevent potential data leakage.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes a synthetic training data generation method for mathematical LLMs. Based on two small models at a 7B scale, the authors achieve state-of-the-art performance than other models trained with the data from larger LMs. The proposed method including question supervised fine-tuning, question preference tuning and reward-score-based selection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"As for the method, ScaleQuest generates questions independently from scratch, removing dependency on existing question datasets, which enhances question diversity and supports scalability. Also, the paper integrates comprehensive filtering techniques, including language, solvability, and difficulty sampling, which could be a good reference for future efforts in data filtering.\\n\\nThe presentation is very clear, the workflow of the method is easy to follow. All the details such as prompts are all clearly given. The authors said they will release the data and code, which will be a useful resource to the community.\", \"weaknesses\": \"The main experiments in Table 1 are somehow not very fair. Some of the baseline methods contain less data than the used dataset in the paper.\\n\\nIn Table 1, it seems that Qwen2-Math-7B-ScaleQuest achieves similar performance with Qwen2-Math-7B-Instruct, I am wondering if their performance is similar on OOD test sets like GSM-hard (https://huggingface.co/datasets/reasoning-machines/gsm-hard) and MathChat (https://github.com/Zhenwen-NLP/MathChat). I would like to see if Qwen2-Math-7B-ScaleQuest is over-fitting on GSM and MATH style questions.\\n\\nFor the efficiency result, it seems that the cost is similar to (even slightly higher) GPT-4o mini if we put that in the table. I am wondering why the authors choose models like Qwen2-Math-7B instead of GPT-4o mini for solvability & difficulty check, etc.\", \"questions\": \"Typo: Filering -> Filtering in line 215\\n\\nIn Figure 5, it seems that QPO is less effective. Does the author try the combination of QFT and reward filtering only?\\n\\nI am curious about the effectiveness of Solvability Filtering and Difficulty sampling. For Solvability Filtering, it seems that the final dataset still does not have perfect quality but produces a good performance. So I am curious about the influence of the quality. For difficulty sampling, I am not sure why we need to fit certain difficult distributions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reponse to Reviewer KJ61 (Weakness 4 & Question)\", \"comment\": \"> To Weakness 4: There are many components in the data synthesis pipeline, but the impact of each component is not clear, etc\\n\\nWe would like to clarify the purpose of our ablation study, which is to demonstrate the effectiveness of each submethod in the pipeline. Our ablation results, as shown in Figure 5, include evaluations with **solvability** and **difficulty filtering**, highlighting their contributions.\\n\\nMore analysis has been updated in Appendix C.\\n\\n| +QFT | +QPO | +RF | Avg of 4 Benchmark |\\n| ---- | ---- | ---- | ------------------ |\\n| No | No | No | 42.2 |\\n| No | No | Yes | 44.1 |\\n| Yes | No | No | 52.9 |\\n| Yes | Yes | No | 53.3 |\\n| Yes | No | Yes | 54.7 |\\n| Yes | Yes | Yes | 55.1 |\\n\\n---\\n\\n> To Question: There are plenty of LLMs used in the data synthesis pipeline: DeepSeekMath-7B-RL , Qwen2-Math-7B-Instruct, GPT-4o-mini, GPT-4o, DeepseekMath-7B-Base, InternLM2-7B-Reward. Can you provide a Table for all the settings? Is there any specific reason to select different LLMs for different stages?\\n\\n| Stage | Models | purpose | Why to choose |\\n| ----------------------------- | ------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| Train Question Generator | DSMath-7B-RL; Qwen2-Math-7B-Ins | Activate the question generation capability of problem-solving models. | Two recent problem-solving models; Multiple generators contribute to greater data diversity. |\\n| Query Preference Optimization | GPT-4o-mini; GPT-4o | Construct preference data | GPT-4o-mini is better suited for following instructions to optimize questions (as shown in Figure 4); GPT-4o is used to judge whether generated questions are solvable. |\\n| Solvability filtering | Qwen2-Math-7B-Ins | Check the solvability of each generated question. | Large-scale generated data should be processed, so the 7B-scale model should be a cheaper choice. |\\n| Difficulty filtering | DSMath-7B-base | Generate score for each question | Inspired by DART-Math, we experimented with DSMath-7B-Base and DSMath-7B-RL and found only a minimal difference between them. |\\n| Response Generation | Qwen2-Math-7B-Ins | Generate response | Recent Math Problem-Solving Model. |\\n| Reward Filtering | InternLM-7B-Reward | Assign a reward score for each response and keep the one with the highest score. | This choice is primarily based on the model\\u2019s performance on the reasoning subset of the Reward Bench. |\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Further Clarifications and Experiments\", \"comment\": \"Thanks for the quick feedback. We would like to clarify some unclear points in our first round of the response. We hope the newly attached response and experiments mitigate some of your concerns.\\n\\n> To Concern 1: The adaptation capability of the proposed method to a new domain is cost-consuming and ineffective.\\n\\nWe apologize for any confusion regarding the settings. To address the concern about effectiveness in code reasoning task, we would like to provide further clarification based on the experimental results for code reasoning:\\n\\n- **CodeFeedback-Raw** refers to the public version, which consists of a condensed collection of problems from various open-source datasets. It is one of the most frequently used datasets for code reasoning. Our method demonstrates significant improvements over this dataset (5.3% in average accuracy).\\n- **CodeFeedback-Aug**: To ensure a fair comparison, we applied the same reward filtering method to enhance the responses of existing problems, resulting in CodeFeedback-Aug. The quality of the problems synthesized by our approach is comparable to, or even surpasses, that of CodeFeedback itself, which includes both **real-world questions and high-quality synthetic questions**. This result further highlights the high quality of the questions generated by our method.\\n\\n---\\n\\nRegarding the concern about cost-effectiveness, we would like to provide the following clarifications:\\n\\n- The training cost for the question generator is minimal, requiring only around 20 GPU hours (approximately 2.5 hours on an 8 A100-40G-PCIe server), which constitutes just 3% of the total data synthesis cost, as shown in Table 4. While using a single question generator is entirely feasible, employing multiple generators yields better results, as discussed in *Multiple question generators enhance data diversity* in Section 3.3. We also simplify our approach and the results can be seen in our following response to concern 2.\"}",
"{\"title\": \"Response to Reviewer FQiU (Weakness)\", \"comment\": \"> To Weakness 1: The proposed Question Preference Optimization (QPO) appears to be less effective; as shown in Figure 5, the difference between QPO and QFT is minimal, raising questions about the validity of QPO.\\n\\nWe agree that the impact of QPO on the final SFT model\\u2019s performance is limited. Allow us to provide further clarification:\\n\\n- **Regarding the limited SFT performance:** QPO specifically targets question generation and does not directly affect the response generation process, which explains the relatively modest performance gains. However, our results demonstrate that QPO is effective, achieving consistent improvements in the solvability and difficulty of generated questions, as well as in the overall SFT effectiveness.\\n- **Beyond SFT performance:** Though the impact of QPO on SFT may be minimal, it significantly enhances the **data generation efficiency.** Specifically, QPO improves the solvability of generated questions from 75.4% to 83.6%, a meaningful enhancement that boosts the efficiency of data utilization. While the effect may appear minimal due to subsequent solvability filtering, our detailed analysis shows that 28.8% of unsolvable questions were filtered out in the baseline setting, whereas after QPO, only 19.4% were deemed unsolvable. This represents a 9.4% reduction in computational overhead.\\n\\n---\\n\\n> To Weakness 2: the authors should conduct a more fair and detailed comparison, etc\\n\\nThank you for pointing out this unfair setting. To ensure a fair comparison and demonstrate the effectiveness of QFT, we controlled the question generator to be consistent (using DSMath-7B-RL and Qwen2-Math-7B-Ins, each generating 1M data points, i.e., **2M in total for Magpie**). We then applied language, solvability, and difficulty filtering, followed by response generation and reward filtering. This process resulted in approximately 1.2M data points for SFT. The results (based on Qwen2-Math-7B) are as follows:\\n\\n| Method | GSM8K | MATH | College Math | Olympiad Bench | Avg |\\n| ------ | ----- | ---- | ------------ | -------------- | ---- |\\n| Magpie | 75.9 | 47.7 | 38.2 | 14.6 | 44.1 |\\n| Ours | 89.7 | 73.4 | 50.0 | 38.5 | 62.9 |\\n\\nThe corresponding results have been updated in the revised version (Figure 1 and Figure 5).\\n\\n---\\n\\n> To Weakness 3: The ablation experiments are insufficient, etc\\n\\nThank you for your feedback, and we sincerely apologize for any shortcomings in our ablation experiments. We have added Figure 5 to include results using Qwen2-Math-7B as a base model, providing a broader comparison.\\n\\nWe hope you understand that the SFT process is quite time-consuming for us. We will try to complete the experiments for the remaining two base models and include the results before the end of rebuttal period.\\n\\n---\\n\\n> To Weakness 4: with advancements in open-source models, previous sample-driven and knowledge-driven question synthesis models can also be replaced with open-source models, etc\\n\\nExactly, more advanced models tend to generate better questions and answers. Here are our key observations:\\n\\n**Limited effectiveness of open-source models in question sample/knowledge-driven approaches:** Some math-specialized models, such as Qwen2-Math-7B, are not well-suited for sample-driven or knowledge-driven approaches. 
These methods demand the model to generate valuable questions under complex constraints (e.g., specific topics or knowledge points), a task that these problem-solving models often struggle with. For example, when we used Qwen2-Math-7B-Ins to optimize given questions, the quality of the generated questions even declined (as shown in the top plot of Figure 4). Similarly, general-purpose models like Llama3-8B also faced challenges in producing high-quality questions due to their lack of mathematical specialization.\\n\\nThis may be why many sample-driven and knowledge-driven approaches rely on highly aligned closed-source models like GPT-4. This also highlights the unique value of ScaleQuest in enabling open-source models to overcome these limitations.\\n\\n**Efficiency advantages of our approach:** Our question generation process requires only a minimal number of tokens (e.g., a BOS token), and the generated content directly serves as the final question without any redundant steps. In contrast, sample-driven or knowledge-driven methods often rely on heavily constrained prompts and multiple rounds of verification. **This will result in significantly higher consumption of input and output tokens, leading to greater computational overhead, regardless of the model used**.\"}",
"{\"title\": \"Response to Reviewer CoxX (Weakness)\", \"comment\": \"> To Weakness 1: This reliance on fine-tuned, domain-specific models, while effective in the tested domain, makes it challenging to adapt ScaleQuest to broader applications, potentially limiting its utility as a general-purpose data synthesis method.\\n\\nWe sincerely apologize for any misunderstandings and would like to clarify that our data generation method in this work is tailored to reasoning tasks, where there is a significant scarcity of high-quality data.\\n\\nMany general-purpose data generation methods currently fall short in reasoning tasks such as mathematics and code [1, 2]. Reasoning has already become a crucial component of data synthesis [3], with numerous studies dedicated specifically to this area.\\n\\nApart from Mathematical Reasoning, we extended our method to the Code Reasoning task as a simple validation, and the results are listed in our response to weakness 2. By keeping the answer generation model and data volume identical, our dataset outperformed the currently popular code dataset, CodeFeedback-Filtered [4]. More details can be seen in Appendix B of the revised version.\\n\\n[1] https://arxiv.org/abs/2406.08464\\n\\n[2] https://arxiv.org/abs/2312.15685\\n\\n[3] https://arxiv.org/abs/2404.07503\\n\\n[4] https://arxiv.org/abs/2402.14658\\n\\n---\\n\\n> To Weakness 2: the paper appears to make some overclaims regarding its scope and efficiency, etc\\n\\nThank you for your valuable feedback. In response, we included another important task, **code reasoning**, as a simple validation, and the results are as follows:\\n\\n| Model | | HumanEval | MBPP | BigCodeBench | Avg |\\n| ---------------- | ---- | --------- | ---- | ------------ | ---- |\\n| CodeFeedback-Raw | 156K | 79.3 | 77.2 | 35.6 | 64.0 |\\n| CodeFeedback-Aug | 156k | 84.1 | 84.7 | 39.0 | 69.3 |\\n| ScaleQuest-Code | 156k | 86.6 | 83.1 | 40.0 | 69.9 |\\n\\nWith the same amount of data, our approach outperforms the widely used dataset CodeFeedback-Filtered. Additionally, we augmented the responses for the problems in CodeFeedback-Filtered using the same response generation process as our method, creating a new dataset, CodeFeedback-Aug. The results demonstrate that our approach still achieves superior performance, highlighting the higher quality of the questions generated by our method.\\n\\nMore details are shown in Appendix B in our revised version.\\n\\n---\\n\\nFor the QPO stage, smaller models exactly struggle to follow instructions effectively to optimize given questions (as demonstrated in Figure 4, where Qwen2-Math-7B-Ins performs poorly on this task). However, **the data cost for QPO is minimal, requiring only 10K examples**. If human-annotated data were available, it would further enhance the process. GPT-4o-mini was used as a substitute in the absence of available data. 
Additionally, we experimented with open-sourced Llama3.1-70B-Ins and found that it outperforms GPT-4o-mini in this task.\\n\\nGPT-4o-mini (solvable ratio): 72.2 -> 83.7\\n\\nLlama3.1-70B-Ins (solvable ratio): 72.2 -> 86.7\\n\\n---\\n\\n> To Weakness 3: effectiveness of our method, etc\", \"we_would_like_to_clarify_a_misunderstanding\": \"our model is not fine-tuned on Qwen2-Math-7B-Ins but rather on the base model, Qwen2-Math-7B-Base.\\n\\nRegarding the effectiveness of our method, we would like to emphasize two key points:\\n\\n- **Qwen2-Math-7B-Ins** demonstrates strong performance on mathematical tasks; however, **none of its training data has been made publicly available**. This means that while the model itself can be used, the underlying data cannot be leveraged for further developments or customized, tailored applications.\\n- In contrast, our dataset, generation method, and implementation details are fully open-sourced. While **Qwen2-Math-7B-Ins serves as a teacher model and can be viewed as an \\\"upper bound\\\"**, our work achieves comparable performance and even surpasses it on certain tasks, further demonstrating the effectiveness of our approach.\"}",
"{\"title\": \"Official Comment by Reviewer FQiU\", \"comment\": \"Thanks to the author for his detailed reply to my concern.\\n\\n- Judging from the additional experiments, the improvement brought by the QPO design is too small. Maybe it is even better to generate more questions in the question generation stage?\\n\\n- As can be seen from Figure 5, MAGPIE has also improved to a certain extent after passing Question and Response Filter, but it is still lower than the author's method. If the generation conditions (sampling coefficients) are consistent, the method proposed in this article can indeed be better than MAGPIE. Efficiently extract training data from the model.\\n\\nOverall, the QFT proposed by the author has certain effectiveness on synthetic data (QPO effects are too small, and FIlter technology is existing). \\n\\nI will adjust the soundness from 2 to 3, but I still think that the technical contribution of this article cannot meet the acceptance threshold of this conference.\"}",
"{\"metareview\": \"This paper presents ScaleQuest, a framework for generating high-quality mathematical reasoning datasets using smaller open-source models. The authors propose a two-stage process combining Question Fine-Tuning (QFT) and Question Preference Optimization (QPO) to enable generation of diverse math problems without extensive seed data. Using this approach, they generated 1 million problem-solution pairs and demonstrated that models fine-tuned on this dataset achieved substantial improvements of 29.2% to 46.4% on the MATH benchmark, with competitive performance against larger proprietary models.\\n\\nWhile the paper addresses an important problem of making training data generation more accessible and cost-effective, there are several critical limitations that warrant rejection. First, the proposed pipeline is unnecessarily complex, involving multiple stages and different models without clear justification for these design choices. This complexity not only makes the method difficult to implement and reproduce, but also raises questions about its practical utility compared to simpler approaches. The authors have not adequately demonstrated why such a complicated system is necessary over more straightforward alternatives.\\n\\nA second major concern is the domain specificity of the approach. While the results in mathematical reasoning are promising, the method appears to be heavily tailored to this particular domain, with multiple components specifically designed for mathematical problem generation. The authors provide no evidence or compelling argument that their approach could generalize to other important domains like code generation or logical reasoning. This significantly limits the broader impact of the work.\\n\\nThe experimental evaluation also has several concerning issues. The comparisons with baselines lack consistency in terms of training data scale, making it difficult to draw meaningful conclusions about the method's effectiveness. Several key results, particularly in the ablation studies, are inadequately explained or justified. The paper fails to provide clear insights into the relative importance of different components in the pipeline, leaving readers uncertain about which elements are truly essential for the method's success.\\n\\nFurthermore, the selection of different models for different stages of the pipeline appears arbitrary and lacks proper justification. This raises questions about whether the reported improvements are truly attributable to the proposed method rather than simply the careful selection of specific model combinations.\\n\\nWhile the goal of making high-quality training data generation more accessible is valuable, and some of the empirical results are intriguing, these fundamental issues with complexity, generalizability, and experimental rigor make it difficult to assess the true value and broader applicability of the proposed method. A more focused approach with clearer justification for design choices, better analysis of component contributions, and demonstration of potential generalizability would be necessary for the work to meet the bar for acceptance. Therefore, I recommend the rejection of this paper in its current form.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion phase, three main concerns were raised by the reviewers. First was the domain specificity of the method, particularly highlighted by reviewers KJ61 and CoxX who questioned whether ScaleQuest could generalize beyond mathematical reasoning. 
Second was the complexity of the experimental setup and clarity of results, with reviewers FQiU and KJ61 noting issues with baseline comparisons and ablation studies. Third was the general complexity of the pipeline and model selection criteria, raised by all reviewers.\\n\\nThe authors attempted to address these concerns in their response. They extended their evaluation to code reasoning tasks, showing competitive performance against CodeFeedback. They also provided additional ablation studies and analyses, including controlled experiments with equal training data volumes and human evaluations of the generated datasets. The response included clarifications about experimental setup and model selection.\\n\\nHowever, these responses don't fully address the fundamental concerns about the method. While the extension to code reasoning is a positive step, it still represents a relatively narrow domain expansion and doesn't demonstrate broader generalizability. The additional analyses, while helpful, highlight rather than resolve the complexity of the approach - revealing even more components and parameters that need careful tuning.\\n\\nMost importantly, the authors' response reinforces concerns about the method's practical applicability. The need for extensive ablations and analyses suggests that successfully implementing this method requires significant expertise and resources, potentially limiting its practical value to the community despite its theoretical cost advantages over using large proprietary models.\\n\\nThese discussion points ultimately strengthen the case for rejection. While the authors made sincere efforts to address the reviewers' concerns, their responses highlight rather than resolve the fundamental issues with complexity and generalizability that make this work difficult to build upon or adapt to new domains.\"}",
"{\"title\": \"Author Responses\", \"comment\": \"Thanks for your feedback.\\n\\nRegarding your comment on \\\"training a question generator\\\", we would like to provide the following clarifications:\\n\\n- In Jiuzhang 3.0, the \\\"question generator\\\" can be seen as a distillation version of GPT-4 (as mentioned in their abstract: \\\"*we create a dataset using GPT-4 to distill its data synthesis capability into the small LLM*\\\"). The primary focus is on \\\"distillation\\\", whereas our concept of \\\"training\\\" goes beyond the limitations of the teacher model, offering the potential for a higher performance ceiling.\\n- More importantly, we believe it is more accurate to refer to such models as \\\"**question extractors**\\\" rather than \\\"question generators\\\", as it extracts potential valuable questions from large-scale pretraining data. This extractive approach has already been explored in previous works [1,2,3]. The extracted data often (1) heavily depends on pretraining corpora and (2) lacks high quality, which limits its applicability for fine-grained instruction fine-tuning (IFT) in recent works.\\n\\nRegarding the concept of \\\"training a question generator\\\" proposed in our paper, we would like to explain as follows:\\n\\n- The \\\"question generator\\\" for IFT should be capable of generating questions from scratch [4] or based on specific topics or knowledge [5, 6], which has been demonstrated to produce higher-quality and more diverse data for Instruction Fine-Tuning.\\n- Similar to Jiuzhang 3.0, GPT-4 may still be a better choice if cost and closed-source limitations are ignored. However, this highlights the significance of our work: we propose a fully open-source solution, which we see as a valuable contribution to the open-source community.\\n- While, as you mentioned, our method might not excel in finer design aspects like QPO, we believe its potential for further improvements has been acknowledged from your questions. From this perspective, our work can serve as a strong baseline for future research. We also believe that the eventual success of alternative methods or models will build upon this recognized \\\"potential\\\".\\n\\nThank you again for your feedback. We hope the above clarifications address your concerns, and look forward to further discussions.\\n\\n[1] Instruction Pre-Training:Language Models are Supervised Multitask Learners\\n\\n[2] Augmenting Math Word Problems via Iterative Question Composing\\n\\n[3] JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models\\n\\n[4] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing\\n\\n[5] MathScale: Scaling Instruction Tuning for Mathematical Reasoning\\n\\n[6] Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning\"}"
]
} |
1XzTxtezgj | Intervention-based Causal Discrimination Discovery and Removal | [
"Cong Su",
"Guoxian Yu",
"Jun Wang",
"Yongqing Zheng",
"Han Yu"
] | Causal inference is a recent and widely adopted paradigm to deal with algorithmic discrimination. Building on Pearl's structural causal model, several causality-based fairness notions have been developed, which estimate the unfair causal effects from the sensitive attribute to the outcomes by incorporating intervention or counterfactual operators. Among them, interventional fairness (i.e., $K$-Fair) stands out as the most fundamental and broadly applicable concept that is computable from observational data. However, existing interventional fairness notions fail to accurately evaluate causal fairness, due to the following inherent limitations: (i) the causal effects evaluated by interventional fairness cannot be uniquely computed; (ii) the violation of interventional fairness being zero is not a sufficient condition for a causally fair model. To address these issues, we first propose a novel causality-based fairness notion called post-Intervention Cumulative Ratio Disparity (ICRD) to assess the causal fairness of decision models. Subsequently, we present a fairness framework (ICCFL) based on the proposed ICRD metric. ICCFL first generates interventional samples, and then computes the differentiable approximation of the ICRD to train a causally fair model. Both theoretical and empirical results demonstrate that the proposed ICRD effectively assesses causal fairness, and ICCFL can better balance accuracy and fairness. | [
"Fairness",
"Causal inference",
"Intervention-based metric"
] | Reject | https://openreview.net/pdf?id=1XzTxtezgj | https://openreview.net/forum?id=1XzTxtezgj | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"nBorQLsqa4",
"emqhX9eP3n",
"YArK4AMJZ8",
"Wv0QkOyVVL",
"ErisUYQk0b",
"D4Jsmxs8pw",
"7IvDxJG28g"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"meta_review",
"official_review",
"official_review"
],
"note_created": [
1737524025412,
1730718524002,
1730848257211,
1730688989349,
1734792200482,
1731110274379,
1730627995816
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10091/Reviewer_GHKa"
],
[
"ICLR.cc/2025/Conference/Submission10091/Reviewer_mDmt"
],
[
"ICLR.cc/2025/Conference/Submission10091/Reviewer_P7r5"
],
[
"ICLR.cc/2025/Conference/Submission10091/Area_Chair_9JDq"
],
[
"ICLR.cc/2025/Conference/Submission10091/Reviewer_CYP9"
],
[
"ICLR.cc/2025/Conference/Submission10091/Reviewer_zFL6"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The paper introduces a new fairness metric called Intervention-based Cumulative Ratio Disparity (ICRD), which aims to address limitations in existing causal fairness metrics (K-Fair) by measuring cumulative causal effects along prediction probabilities by intervening on sensitive attributes. Additionally, the authors propose a fairness framework, ICCFL, which incorporates the ICRD metric to train fairer models. Through theoretical and empirical analyses, the paper demonstrates that ICCFL better balances fairness and accuracy than existing fairness-aware algorithms across multiple datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors clearly illustrated the limitations in existing interventional fairness metrics, and the related works section is comprehensive and easy to follow.\\n\\n2. The proposed formulation of ICRD is sound and the authors provide the theoretical analysis on how ICRD addresses the limitations of existing causal fairness metrics.\\n\\n3. The authors proposed a fairness framework, ICCFL, which incorporates a differentiable approximation of the ICRD metric to enable efficient training.\", \"weaknesses\": \"1. The proposed method assumes the causal model is known, which may be a strict assumption. It would be great for the authors to discuss the sensitivity of the proposed metric and framework to potential causal graph misspecification.\\n\\n2. This paper assumes the sensitive attribute is binary. Could the proposed metric be extended to handle multiple sensitive attributes?\\n\\n3. The method leverages causal generative models to infer the distribution of exogenous variables. It would be useful to explore the robustness of the approach when estimating interventional distributions with different causal generative models.\", \"questions\": \"Please refer to the questions in Weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper adds to the causal fairness literature by proposing a new metric to measure unfairness and a strategy for training fair models. It follows previous work by Salimi et al (2019) and Ling et al (2023) on interventional fairness or K-fairness. An algorithm is K-fair if interventions on the sensitive attribute do not change the predictions, while also causally conditioning on a given context K. The current paper extends this definition by applying a 1-Wasserstein distance to the difference between the interventional distributions, with interventions on the sensitive attribute. The proposed training strategy is empirical risk minimization with a penalty term added using the aforementioned 1-W. distance. The paper includes a few basic theoretical results and experiments comparing the method with several alternate methods on several datasets.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The overall setting is well-chosen and the contribution appears to be solid.\", \"weaknesses\": \"Compared to the existing work I believe this paper is somewhat incremental. The novelty is not high. The experiments are OK. The presentation and explanations of both current work and its context in related literature are not very clear.\", \"questions\": \"Since the main contribution of this paper is to build on the K-fair definition, what could you do specifically to include a more comprehensive and clear explanation of K-fair? How is the set of contexts C chosen, and which contexts were used in the experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"Motivated by shortcomings of existing interventional fairness notions, this paper proposed a new causality-based fairness notion called post-Intervention Cumulative Ratio Disparity (ICRD). ICRD measures the cumulative causal effects along prediction probabilities by intervening on the sensitive attribute. The authors explained ICRD\\u2019s superior properties over existing intervention causal fairness notions. Additionally, they developed a new fairness framework based on ICRD: Intervention-based Cumulative Causality Fairness Learning approach (ICCFL) formulates a constrained optimization problem where the ICRD metric is included in the prediction loss of the model. Empirical evidence from comparing ICCFL with several benchmark methods demonstrated that ICCFL could attain better causal fairness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper improved the existing interventional fairness notion, K-Fairness, in a comprehensive way that both develops a new fairness notion and proposes an algorithm for applying the new fairness notion. The authors also provided relevant theoretical support for the validity of both ICRD and ICCFL, which add to the technical soundness of the paper.\\n\\n2. The paper provided useful details in the experiment evaluation of the ICCFL method: Section 5.3 offered empirical evidence for the benefit of ICRD, and Section 5.4 discussed observations related to hyperparameter choice in ICCFL.\", \"weaknesses\": \"1. Given that the differences between the K-Fair notion and the new ICRD notion are somewhat subtle, the paper could benefit from clearer explanations. For example, Example 1 used to discuss limitation 1 might be applied again after introducing ICRD to illustrate how ICRD applies here, such as, what are the possible contexts C in this example. On a related note, although the ICRD notion has clear advantages over the K-Fair notion, it is unclear whether these advantages alone justify adding ICRD to the already large number of causal fairness definitions. It would be helpful to discuss the benefits of ICRD as a causal fairness definition in general.\\n\\n2. The ICRD notion centers on disparity in the cumulative causal effects. This is not necessarily desirable for understanding discrimination, as we may be more interested in dissecting the causal effects associated with specific scenarios. It would be helpful to discuss potential insufficiencies of the ICRD notion, for example, when ICRD may not be identifiable, when enforcing ICRD to be 0 may be too restrictive for fairness.\", \"questions\": \"1. How does the ICRD notion address limitation 1?\\n\\n2. Why does interventional fairness have fewer identifiability challenges compared to the counterfactual fairness and path-specific fairness, as mentioned on line 228, page 5? \\n\\n3. Can the ICCFL method be compared with any benchmark methods using other causal fairness notions, such as, path-specific fairness? This might reveal interesting observations on the comparison between ICRD and other non-intervention based causal fairness measures.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper proposed a fairness metric that measures cumulative causal effects along prediction probabilities by intervening on sensitive attributes\", \"strengths\": [\"Studies an important limitations in existing interventional fairness measures\"], \"weaknesses\": [\"The contribution appears somewhat incremental, with reviewers questioning whether the advantages of ICRD justify adding another fairness definition to an already crowded field\", \"The paper makes strong assumptions about knowing the causal model and having binary sensitive attributes, with limited discussion of robustness to model misspecification\", \"The method underperforms on conventional metrics (accuracy and K-fairness) compared to benchmarks, and the paper lacks comparisons with non-intervention based causal fairness measures\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers are largely in agreement that the paper in its current form does not meet the acceptance threshold.\"}",
"{\"summary\": \"This paper proposes a novel causality-based fairness notion called post-intervention Cumulative Ratio Disparity (ICRD) to assess the causal fairness of the decision models, and then presents a causal framework based on ICRD. The theoretical and empirical results show that ICRD can assess causal fairness and the causal framework can better balance accuracy and fairness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes a novel notion to measure causal fairness. This notion makes intuitive sense and seems easy to implement.\", \"The paper proposes a new algorithm to train a model, where the causal fairness notion is cast as a regularization term.\", \"On several empirical datasets, the proposed algorithm seems to perform best in terms of causal fairness, as compared to several benchmarks.\"], \"weaknesses\": [\"It seems that the proposed algorithm is not very competitive as compared to benchmarks if one primarily cares about conventional metrics e.g., K-fair and accuracy.\", \"The theoretical results are quite intuitive, and the proof is straightforward. It would be helpful to the contributions of the paper, and why the contributions are nontrivial to obtain.\", \"The references of this paper do not contain a single ICLR paper. It would be helpful to better demonstrate the fit of this paper to ICLR.\"], \"questions\": \"Please address the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper demonstrates the limitations of the existing interventional fairness and then proposes a new causal fairness metric called Intervention-based Cumulative Rate Disparity (ICRD). ICRD aims to measure the post-intervention cumulative causal effects along the prediction probabilities for any intervention on the context. In addition to defining this metric, the authors propose an algorithm designed to achieve ICRD.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"It is reasonable and meaningful to uncover the limitations of the existing fairness notions and propose a new one.\", \"The experimental results show the effectiveness of the proposed method.\"], \"weaknesses\": [\"The motivation behind ICRD is somewhat ambiguous. Specifically, regarding sufficiency, how is a condition defined as \\u2018sufficient\\u2019 for evaluating causal fairness? It seems that the sufficiency aspect depends significantly on the particular causal fairness definition in use, and the current explanation feels unclear on this point. The insufficiency aspect could benefit from greater elaboration.\", \"Lines 313\\u2013314 state that \\u201cICRD encompasses K-Fair and represents the cumulative causal effect of K-Fair across all decision thresholds,\\u201d but this claim is difficult to interpret without additional clarification. Similarly, it is unclear how Table 1 was generated or how the decision threshold impacts outcomes. Could the authors further clarify these aspects?\", \"Finally, I am unconvinced that the decision threshold\\u2019s impact constitutes a limitation of K-Fair. K-Fair requires two distributions to be equivalent; hence, it is unclear how the decision threshold would influence this requirement. More discussion on this would be valuable to fully understand the claimed limitation.\"], \"questions\": \"Please refer to the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1XxNbecjXe | Soft Prompts Go Hard: Steering Visual Language Models with Hidden Meta-Instructions | [
"Tingwei Zhang",
"Collin Zhang",
"John Xavier Morris",
"Eugene Bagdasaryan",
"Vitaly Shmatikov"
] | We introduce a new type of indirect, cross-modal injection attacks against language models that operate on images: hidden "meta-instructions" that influence how the model interprets the image and steer its outputs to express an adversary-chosen style, sentiment, or point of view. We create meta-instructions by generating images that act as soft prompts. In contrast to jailbreaking attacks and adversarial examples, outputs produced in response to these images are plausible and based on the visual content of the image, yet also satisfy the adversary's (meta-)objective. We evaluate the efficacy of meta-instructions for multiple models and adversarial meta-objectives, and demonstrate how they "unlock" capabilities of the underlying language models that are unavailable via explicit text instructions. We describe how meta-instruction attacks could cause harm by enabling creation of self-interpreting content that carries spam, misinformation, and spin. | [
"security",
"machine learning",
"adversarial perturbations",
"large language models"
] | Reject | https://openreview.net/pdf?id=1XxNbecjXe | https://openreview.net/forum?id=1XxNbecjXe | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yVAm3LKz0e",
"t5C727SW14",
"l6QmbXrulI",
"iPNEIekqpB",
"dxJCtPvJyy",
"dHR7gQFjNq",
"dGjd8TT8ln",
"ZtaUPtUyhf",
"ZG6sDIsUAt",
"YcVm9yplSI",
"UWPUKU82F7",
"TmLuEMbJlB",
"SvWEtAzOIe",
"QJlkOHYNvr",
"NRBQTUyuyn",
"MNy3HwV1I7",
"M1WuswTlnk",
"EHFJTMYRSQ",
"6cpyX5rRhB",
"54Ge5CKrOM",
"3cwevFmjHZ",
"05gWzOtk4p"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review"
],
"note_created": [
1730669781304,
1732822317279,
1732721685330,
1732723520994,
1732379389612,
1730338074197,
1732380086886,
1732664405856,
1737523838848,
1732683695589,
1732664471551,
1730671268414,
1732554852369,
1732516657796,
1732379014246,
1730538516543,
1732380777570,
1732555002731,
1732664303192,
1732378939011,
1732664328542,
1734760595726
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_j1Tw"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_YgaV"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_g6PQ"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_YgaV"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_j1Tw"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_g6PQ"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_j1Tw"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_VLtV"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_VLtV"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Reviewer_YgaV"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7440/Area_Chair_vNgV"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces a method to create image inputs to Vision Language Models (VLMs) that lead said model to respond to any user query appended to the image with a certain \\\"spin\\\", e.g. responding with a certain sentiment, or in a certain language. The authors refer to this as embedding a \\\"meta-instruction\\\" in an image.\\n\\nCritically, a meta-instruction attack is only successful if the models response to the users query (and the attacked image) responds to the query whilst following the meta-instruction (e.g., if the meta-instruction was \\\"talk in French\\\" and the model responded in French but did not answer the users query, then this would not be a successful attack).\\n\\nTo train these meta-instruction attacks, the authors perform projected gradient descent on an image to minimize the language modeling loss of the VLM inputted with this image over a dataset of synthetic question answer pairs with the answers following some target natural language meta-instruction.\\n\\nThe results of the paper demonstrate that this method can be used to learn adversarial images for various different types of meta-instructions. The authors also demonstrate a non-trivial transfer of meta-instruction images between models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"## Originality\\n\\nThe question of prompt injection vulnerabilities to large language models is of significant importance. The authors demonstrate that models are vulnerable to similar attacks of this nature through their vision input as are possible through their text input. What's more, they show the vulnerability is in some cases worse through the image input.\\n\\nWhilst the idea of providing meta-instructions through image inputs its not entirely novel (see weaknesses section), this paper is the most thorough treatment of the subject that I am aware of, and brings to light new and concerning ways that a model's output can be tampered with using images.\\n\\n## Quality and clarity\\n\\nThe paper is well written the method is conveyed clearly. The results section contains a good depth of experiments, most importantly covering a number of popular open-source VLMs and target meta-instructions.\\n\\n## Significance\\n\\nAs VLMs are used more frequently for agentic tasks that will expose them to untrusted data from the internet, prompt injection / meta-instruction attacks will become more and more concerning. Thus the paper concerns a timely and interesting threat model that the adversarial attack community should be exploring in more detail.\", \"weaknesses\": \"While the critique is long, this is only because I believe the paper has interesting results that could be improved.\\n\\n## Presentation of previous work\\n\\nThe authors make a number of claims about prior work that I believe are not completely accurate. Editing the language around these claims would help to improve the paper. Here are some examples that I believe need to be addressed:\\n\\n- Line 32 - \\\"But injection attacks in non-text modalities are a new, yet-to-be-explored area of LLM safety research.\\\" I do not think this is entirely true. For example, Bailey et al. [1] explore how train an image to convey a certain text prompt, which they demonstrate can be a prompt injection attack. 
\\n- Line 83 - \\\"By design, jailbreaking and adversarial examples produce contextually incoherent outputs that do not actually answer users\\u2019 questions about images.\\\" I think this depends on how you define an image jailbreak. For example, Dong et al. [2] produce adversarially perturbations to harmful images that lead to a model answering coherently about said image --- in particular the model is able to correctly identify what is in the image. While the authors claim here is correct for other image jailbreaking work, such as Qi et al. [3] who learn images unrelated to the harmful request they are trying to elicit a response about from the model, it is not universally correct. For this reason the claim should be softened.\\n- Line 84 - \\\"They [jailbreaking and image adv attacks] are not stealthy and cannot be used for indirect attacks because users would notice that the VLM\\u2019s outputs are wrong given the conversation context and inputs.\\\" Bailey et al. [1] and Qi et al. [3] both demonstrate methods to create jailbreaking images under epsilon ball constraints, which is the definition of stealthiness the authors use on line 290. \\n\\n## Novelty / originality\\n\\nFollowing on from some of the comments above, I believe there is a question of novelty / originality of this work. \\n\\nIn particular, the general algorithm presented to produce meta-instruction attacks essentially involves creating a dataset of input output pairs, and training an image by PGD to maximize the likelihood over this dataset. This method appears to fit into the \\\"Behavior Matching\\\" algorithm from Bailey et al. [1] \\n\\nDespite this, I believe the work does contain novel and important contributions. In particular: \\n1. The study of the changes in semantic meaning present in images from various different attacks, with meta-instruction attacks preserving meaning.\\n2. The transfer experiments in Table 4 are very interesting.\\n3. This is the most thorough treatment of prompt injection image attacks I have seen.\\n\\n## Summary\\n\\nCombining the above two points, I believe the paper needs to be\\nrewritten to more clearly lay out the novelty of the paper \\nand more accurately represent the papers contribution.\", \"my_high_level_suggestions_would_be\": \"1. Make it clear that prior works have examined prompt injecting image attacks, however yours is a more complete treatment of the topic.\\n2. Make it clear that your method to create such attacks is a special instance of what prior works have introduced. \\n3. From this, your novelty comes not from the method but rather the results. E.g. line 88 that reads \\\"We design, implement, and evaluate a method for creating a new type of image perturbations that act as cross-modal soft prompts for a language model while preserving the visual semantics of the image.\\\" needs to be adjusted.\\n4. Given that I do not think the method is novel, I would suggest running the following additional experiments:\\n\\t1. In Table 4, add transfer results to Claude and GPT-4o. These results should feature in the transferability experiment.\\n\\t2. More detailed defense experiments. Appendix C shows fairly simple defenses can work to avoid meta-instruction attacks. [1] finds that training perturbations under different constraints (e.g. a moving patch) ends up being more robust to simple defenses. 
It would be interesting to see if this result is reproducible in your setting.\\n\\nTo reiterate, I think studying prompt-injection images to models is important, and the authors present valuable results. I thank the authors for their hard work! \\n\\n\\n[1] - Bailey, Luke, et al. \\\"Image hijacks: Adversarial images can control generative models at runtime.\\\" arXiv preprint arXiv:2309.00236 (2023).\\n\\n[2] - Dong, Yinpeng, et al. \\\"How Robust is Google's Bard to Adversarial Image Attacks?.\\\" arXiv preprint arXiv:2309.11751 (2023).\\n\\n[3] - Qi, Xiangyu, et al. \\\"Visual adversarial examples jailbreak aligned large language models.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024.\", \"questions\": \"I summarize some of my comments on weaknesses of the paper into questions below:\\n\\n1) Do the authors agree with my comments about their portrayal of previous works, and if so what steps are the authors taking to address this? Concretely, what sections of the paper have been rewritten.\\n2) Have the authors been able to run the suggested experiments I have mentioned above, and if so what did they find?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the update. It seems that the evaluation on very simple defense (Random Resizing and Cropping) as well as more advanced approach (like DiffPure suggested by Reviewer VLtV) is missing. However, these evaluations are crucial to validate how practice the attack is.\"}",
"{\"comment\": \"Thanks for your response. I apologize for the late reply. Overall, I feel like the final outcome remains unchanged, i.e., steer the model to generate harmful outputs, regardless of whether the user is the attacker or the victim as mentioned in the general response. I will keep my rating.\"}",
"{\"comment\": \"Thank you for your reply. When the user is the victim, the goal is *not* just to steer the model to generate adversarial outputs. The model has to generate adversarial outputs *that actually answer the user's questions about the image* (e.g., in misinformation, spin, negative bias, etc. scenarios). Otherwise, the attack is neither stealthy, nor makes sense for the adversary.\\n\\nPrior methods cannot do this. They *either* answer questions, *or* generate out-of-context harmful outputs unrelated to the content of the image. The primary contribution of our work is to show how to steer a model to maintain the conversation and correctly answer queries about the image while also satisfying an adversarial objective (e.g., an adversary-chosen interpretation of the content).\"}",
"{\"comment\": \"Thank you for your review.\\n\\n1 and 2. Evaluation with Diverse Datasets and diverse prompts: Thank you for the suggestion! We conducted additional experiments using images from MSCOCO and tripled the number of test queries. The results showed a similar trend to our original paper: images with hidden meta-instructions performed comparably to explicit instructions and outperformed the no-attack baseline.\\n\\n| LLaVA | No attack | Explicit text instructions | Our attack |\\n|------------------------------|-----------|----------------------------|------------|\\n| Positive | 0.45 | 0.8 | 0.65 |\\n| Negative | 0.02 | 0.46 | 0.35 |\\n| Neutral | 0.53 | 0.6 | **0.7** |\\n\\n3. Effectiveness Against Inference-Time Defenses: We appreciate the mention of inference-time defenses like DISCO, DiffPure, and IRAD. We evaluated test-time defenses such as JPEG compression and anomaly detection, which have shown efficacy against adversarial examples targeting visual chatbots in recent works. The defenses mentioned in the review are primarily designed for adversarial attacks against CNN classifiers, whereas our work targets VLMs, a different task and architecture.\"}",
"{\"summary\": \"This paper proposes a new attack objective in which the output text remains consistent with the input images but adopts an adversary-chosen style, sentiment, or point of view. The adversarial optimization is applied to the input image, ensuring that the modifications are imperceptible to humans. Experiments demonstrate that images containing hidden meta-instructions achieve significantly higher success rates compared to those with explicit instructions. This attack highlights a practical risk, as it enables the dissemination of seemingly coherent but misleading information.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The focus on the dissemination of seemingly coherent misinformation is highly practical and addresses a significant real-world concern.\\n\\n2. The evaluation is thorough, including robustness testing against JPEG compression as a defense (which I suggest moving to the main text, given its practicality in everyday use) and examining the transferability of the attack across different vision-language models (VLMs).\", \"weaknesses\": \"1. A NeurIPS 2024 paper [1] also explores the dissemination of seemingly coherent misinformation in visual language models, but through the lens of data poisoning. While this paper focuses on test-time adversarial attacks, it would be beneficial to discuss the key differences between test-time attacks and training-time poisoning, and in what scenarios each is more practical, given the similarity in objectives between the two papers.\\n\\n2. The evaluation of image semantics preservation seems suboptimal. In Section 5.3, semantics are defined using cosine similarity between images, but it is unclear why this metric is particularly relevant. A more meaningful evaluation would assess how well the actual text output of the visual language model aligns with the input images, which is the core focus of this paper\\u2014consistent outputs with images but in adversary-chosen styles, sentiments, or viewpoints.\", \"reference\": \"[1] Xu, Yuancheng, et al. \\\"Shadowcast: Stealthy data poisoning attacks against vision-language models.\\\", The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024\", \"questions\": \"1. Could you also provide an evaluation when random resizing or cropping is applied? Since this paper addresses practical concerns, it would be valuable to test your method under common \\u201cdefenses\\u201d encountered in everyday scenarios.\\n\\n2. Are there any failure cases? For example, are there meta-instructions that are particularly difficult to achieve?\\n\\n3. Why is it necessary to evaluate cosine similarity as done in Section 5.3? Could you clarify the relevance of this metric?\\n\\n4. Is there an evaluation that checks whether the generated textual outputs remain consistent with the input images?\\n\\nOverall, I appreciate the practical focus of this paper. I would be happy to raise my evaluation if these concerns are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your review.\\n\\n**Summary of Contribution and Novelty**: See meta comment for addressing the novelty concern.\", \"weakness\": \"1. Line 32 - Injection Attacks in Non-Text Modalities: We are doing injection attacks for arbitrary adversarial objectives while preserving the model\\u2019s conversational capability, ie, model\\u2019s outputs (1) correctly respond to users\\u2019 queries about the images, and simultaneously (2) satisfy an adversary-chosen predicate. Because of (1), outputting strings from a predefined distribution that satisfies the adversarial predicate (as prior methods do) is not sufficient.\\n2. Line 83 - Coherence in Jailbreaking and Adversarial Examples: Dong et al. indeed demonstrate adversarial perturbations that produce contextually coherent outputs, but they achieve this by forcing the model to generate text strings from a specific distribution *independent of any user prompts*. \\n3. By contrast, when queried about our images, models correctly respond to user prompts about image content and produce outputs that are both coherent in the conversational context *and* follow the adversary\\u2019s instruction.\\n4. Line 84 - Stealthiness and Contextual Coherence: By \\\"not stealthy,\\\" we mean that prior jailbreaking and adversarial examples attacks produce outputs that are obviously incorrect given the image (e.g., toxic strings unrelated to the image, or incorrect descriptions of the image). This is very noticeable to a human user. By contrast, our goal is to produce responses that are plausible given the image yet follow the adversary\\u2019s instruction \\u2013 please see examples in the paper.\", \"additional_experiments\": \"1. Transfer Results to GPT-4o: We tested transferability of our image soft prompts against GPT-4o. They slightly improve the instruction-following rate for the \\u201cgenerate outputs with a neutral spin\\u201d instruction.\\n\\n| Method | Positive | Negative | Neutral |\\n|-----------|----------|----------|---------|\\n| No Attack | 0.27 | 0.03 | 0.7 |\\n| Transfer | 0.25 | **0.08** | **0.96** |\\n\\n2. Defense Experiments: We evaluated the moving patch attack against Llava and obtained the following results. \\u201cOur method\\u201d is the basic soft prompt method; \\u201cour method patch\\u201d is the same method adapted to evade the JPEG defense; \\u201c+ JPEG\\u201d are the results against the JPEG defense. Bold numbers indicate where the attack works as well as or better than the no-attack baseline. These results show that the patch attack slightly improves the evasion.\\n\\n| Method | Positive | Neutral | Negative |\\n|---------------------------|----------|---------|----------|\\n| No-attack | 0.39 | 0.58 | 0.03 |\\n| Our method | **0.66** | **0.6** | **0.47** |\\n| Our method + JPEG | 0.32 | 0.53 | **0.35** |\\n| Our method patch | **0.65** | 0.3 | **0.62** |\\n| Our method patch + JPEG | **0.55** | 0.2 | **0.43** |\", \"questions\": \"1. See meta comment for addressing the novelty concern.\\n \\n2. Please see the experiments above.\"}",
"{\"comment\": \"Thank you for your reply.\\n1. Our goal is not to outperform explicit text prompts. Our threat model is indirect injection: a benign user asks queries about images generated by the adversary. The adversary\\u2019s goal is for the VLM to answer the user\\u2019s queries *as if* instructed by the adversary\\u2019s text prompt. In this threat model, we show that adversarial images (acting as soft prompts) outperform the no-attack baseline and achieve success rates similar to an explicit text instruction from the adversary. This is exactly the goal of the indirect injection attack.\\n\\n\\n We selected meta-objectives based on the most downloaded text classification models on Hugging Face (Appendix B.3), for two reasons. First, they make it possible to evaluate the results, which requires measuring whether responses to queries about images satisfy the meta-objective. Second, they reflect which properties of text HuggingFace users are most interested in. We welcome and appreciate suggestions for additional meta-objectives.\\n\\n2. We used GPT-4 to generate 100 natural queries (40 training queries + 60 testing queries) per image, which we believe provides a realistic simulation of potential user queries. We welcome suggestions for how to increase diversity of queries.\\n\\n3. Due to the rebuttal period constraints, we could not include experiments with DISCO, DiffPure, and IRAD but will consider adding them in future revisions.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Reviewer Response\", \"comment\": \"I thank the authors for the time they spent updating the paper. Having read the new version, I think the paper now better presents the contributions of previous work while highlighting the novelty of the author's work. I also thank the authors for looking into the prompt matching method and agree with their conclusion in the above comment and lines 212 and below. It was also great to see the new transfer results in the paper.\\n\\nIn light of this I am updating my score by two levels to a 6. Thank you again for your hard work!\\n\\n[A minor side point] On line 259 you say that Bagdasaryan et al. present a method that optimizes an image to force a model to output a given string. Having looked at the paper I agree, so possibly they should be cites in the caption and body of Figure 2 (in addition to their current citation in the body of the text)? There could be subtle differences in the techniques I am not seeing however meaning they should not be included in the figure, and I defer to the authors on including this or not.\"}",
"{\"title\": \"Meta comment for the PDF update\", \"comment\": \"We updated Figure 2 to illustrate the difference between our method and prior research. Reviewer concerns regarding novelty and differentiation with prior work have been addressed in lines 032\\u2013042, 093\\u2013096, and 207\\u2013247. Additionally, we included transfer results for GPT-4o in lines 503\\u2013511 and for evasion of JPEG defense in lines 903\\u2013913.\\n\\nWe also plan to include evaluation of MSCOCO images for all adversarial objectives as suggested by reviewer VLtV but are still running experiments, thus these results are not yet in the PDF. We reported the results for the sentiment meta-objectives in the previous response.\"}",
"{\"summary\": \"The paper introduced an attack that enables adversaries to add stealthy \\u201cmeta-instructions\\u201d to images that influence how visual language models respond to queries about these images\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Figures clearly illustrate the point of the paper.\\n2. The writing is easy to follow\\n3. Articulate the attack model and assumptions\\n4. Run transferability test\", \"weaknesses\": \"1. L33, \\\" but injection attacks in non-text modalities are a new, yet-to-be-explored area of LLM safety research\\\". This type of attack has been widely explore in [1] and [2]\\n2. L81, \\\"users are victims of adversarial third-party content that they ask the model to process\\\". I'm curious whether the images are generated by the users or not. If the user create the line chart shown in Fig. 1 from their local devices, does it mean the attack studied in the paper doesn't exist anymore?\\n3. Table 4, why is the transfer rate of llava on negative as low as 0.1?\\n4. I'm curious what will happen if the system prompt of the VLM contradicts with the meta-instruction in the image?\\n5. Overall, I think the paper is in a good quality. The major downside is the novelty, as we already know from previous work that optimizing the input image towards a certain attack target is feasible for VLM. Thus, it's not a new vulnerability in VLM. Though the author attempts to differentiate their attack setting from previous jailbreaking and soft prompt attacks, the overall attack surfaces and methods remain largely the same. I would like to the see more insights coming from the paper. \\n\\n\\n[1] Are aligned neural networks adversarially aligned?\\n[2] A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reviewer Response\", \"comment\": \"Thank you for providing a detailed response to my questions! (and apologies for my slow reply).\\n\\n## Presentation of previous work\\n\\nThe authors have convinced me that their work has a different flavor to prior work, in particular that they focus very directly on images that preserve the coherence and overall meaning of model responses. That being said, I still believe that the specific quotes I picked out in my response **misrepresent prior work**. I believe I am asking for a fairly reasonable change in language to these quotes. I would recommend the authors change the language in the paper pertaining to these sections to better reflect prior work. In particular:\\n\\n1. Line 32 - You note that \\\"outputting strings from a predefined distribution that satisfies the adversarial predicate (as prior methods do) is not sufficient.\\\" As I stated before, the Prompt Matching method from Bailey et al. does not have this feature. You have convinced me, however, that your method more directly achieves the prompt injection objective, however I still believe it is not correct to say \\\"But injection attacks in non-text modalities are a new, yet-to-be-explored area of LLM safety research.\\\" Softening this language seems fairly easy. For example saying it has not been the focus of prior work, and your results are far more expansive in this area (which I believe they are).\\n2. Line 83 - I agree with you response here. You state in your response \\\"Dong et al. indeed demonstrate adversarial perturbations that produce contextually coherent outputs\\\". I think this means you agree that your original quote on line 83 of the paper \\\"By design, jailbreaking and adversarial examples produce contextually incoherent outputs that do not actually answer users\\u2019 questions about images.\\\" is incorrect and should be changed.\\n3. Line 84 - from your response I now understand what you mean by stealthiness. I would still ask the quote to be changed to draw more direct attention to stealthiness as contextually coherent, as opposed to norm constrained.\\n\\n### Additional experiments\\n\\nThank you for running the requested additional experiments. From what I can tell:\\n1. Transfer to GPT-4o is weak. This is totally fine and I think it would be good to report this in the paper. Possibly explanations could include unkown image preprocessing for GPT-4o? Please let me know if this reading of the results is correct (to reiterate, I am not concerned about this being a somewhat negative result, this is good for the community to know).\\n2. You found it is possible to make your attacks more robust to defenses\\n\\n### Summary\\n\\nI thank the authors for their detailed response and taking the time to run additional experiments. To summarize, I am willing to improve my score if there are appropriate changes in language that we have discussed, and the above new results are included somewhere in the paper. It would be most compelling to see the actual changes in the uploaded version of the paper, but of course this may not be possible before the deadline. Please let me know what you can / intend to change in the submitted PDF?\"}",
"{\"comment\": \"Thank you for your reply.\\n\\n1.Regarding weakness 1, while the MSCOCO results are appreciated, your method might not perform as effectively as text prompts in more complex scenarios. Besides, a more comprehensive evaluation of different meta-objectives is needed.\\n\\n2.For weakness 2, I do not think the response fully addresses the issue. You may need to clarify how prompt diversity is ensured in your experiments.\\n\\n3.Concerning weakness 3, since your method uses noise-based attacks, testing against more advanced test-time defenses is essential for a thorough evaluation.\"}",
"{\"comment\": \"Thank you for your review.\", \"weakness\": \"1. Comparison with Shadowcast (Training-time vs. Test-time Attacks): Shadowcast is a training-time attack that assumes the attacker has the capability to poison the model during its training phase. In contrast, our attack operates entirely at evaluation time on the original, unmodified, and unpoisoned model. It does not require any modifications to the training data or any access to the model during training. \\n2. Image Semantics Preservation (Section 5.3): We used cosine similarity and structural similarity to measure the change in visual semantics between clean and perturbed images. Low similarity indicates substantial visual differences, which serves as an indirect measure of semantic preservation. We also performed a comprehensive evaluation of text alignment between the VLM\\u2019s outputs and the original image contexts (as reported in Table 3, Section 5.3). The metric and evaluation process are detailed in Section 5.1, which focuses on the preservation of image semantics. We will improve the discussion to explicitly link the metrics with the semantic preservation objectives of our paper.\", \"questions\": \"1. Random Resizing and Cropping Evaluation: We appreciate the suggestion to evaluate our method under common preprocessing steps like random resizing and cropping. Due to space constraints, we focused on other defenses such as feature distillation and anomaly detection, which are discussed in Appendix C. We will consider including evaluations involving resizing and cropping to demonstrate robustness under these transformations.\\n2. Failure Cases: In practice, we found that some instructions, such as \\\"talk sarcastically\\\" or \\\"talk funnily,\\\" were challenging to achieve, either due to model limitations or difficulties in evaluation. This is not a limitation of our method but rather a limitation of the target models\\u2019 instruction-following capabilities. There were no instructions that were followed if given via an explicit text prompt but failed when using image soft prompts.\\n3. Cosine Similarity in Section 5.3: Addressed in the response to Weakness(2).\\n4. Textual Output Consistency with Input Images: Addressed in the response to Weakness(2).\"}",
"{\"summary\": \"The paper introduces a new type of attack on visual language models. These attacks, termed meta-instruction attacks, involve subtle image perturbations that act as soft prompts to influence how a model interprets images and responds to queries. The idea is to steer the model\\u2019s outputs to satisfy adversary-chosen objectives, such as a specific sentiment, style, or political bias, without the user being aware of the manipulation. The authors demonstrate the effectiveness of this approach across various visual language models, showing that these perturbations often outperform explicit instructions and are transferable across models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The concept of embedding hidden meta-instructions within images offers a new approach to prompt injection for multi-modal models, highlighting a potential vulnerability not extensively covered in existing literature.\\n\\n2. It is interesting to see how the method reveals hidden capabilities of instruction-tuned models. In some cases, the meta-instructions successfully steer the model's outputs in ways that explicit instructions fail to achieve.\\n\\n3. The study provides an empirical evaluation on a range of meta-objectives (e.g., sentiment, language, and political bias), demonstrating the effectiveness of the attack method.\", \"weaknesses\": \"1. The paper's reliance on just five images from a single dataset, ImageNet, limits the robustness and generalizability of its evaluation. ImageNet, which is primarily focused on object recognition, may not adequately represent the diversity and complexity of images encountered in real-world scenarios. Incorporating evaluations on datasets with more varied and complex scenes, such as MSCOCO, would provide a more comprehensive assessment of performance.\\n\\n2. The paper simulates user interaction by generating questions to test meta-instructions, but it provides limited clarity on whether these questions adequately cover a broad range of natural user queries. Limited prompt diversity may affect the robustness of the attack if VLMs encounter different prompts in real-world scenarios.\\n\\n3. Since the meta-instruction is added as noise to the image, the paper does not demonstrate the effectiveness of meta-instructions against recent inference-time defense methods like DISCO[1], DiffPure[2], and IRAD[3]. This could be valuable for understanding its performance in the context of contemporary robustness strategies.\\n\\n[1] DISCO: Adversarial Defense with Local Implicit Functions.\\n[2] Diffusion models for adversarial purification.\\n[3] IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Meta comment addressing the novelty concerns\", \"comment\": \"Our novelty is a broader set of adversarial objectives, for which previous methods (based on forcing a particular output distribution on the model) are inadequate. We updated Figure 2 in the PDF with a concrete illustration that distinguishes our results from previous work (and will add more illustrations). Prior methods force the model to output from a specific, query-independent distribution, which either does not correctly respond to user queries and is thus not coherent in the conversation context, or produces harmful outputs irrelevant to the image.\\n \\nOur threat model is different, too. Because we focus on VLM users as victims of adversarial images, it is important that the model produces plausible responses to victims\\u2019 queries that match the visual content of the image. Prior work focused on VLM users as attackers, ie, creators of adversarial images that aim to evade system prompts, safety alignment, etc. Coherent, plausible, image-based responses to user prompts are thus not important (and not achieved) in prior work.\\n\\nPrior methods aim to fix the output distribution of the VLM before user queries are known (this is an explicit training objective in Bailey et al). This works for jailbreaking, where the user is the attacker and the goal is to output any harmful text, even if not related to the inputs. This does not work for other instructions and threat models (where VLM users are victims, not attackers). For example, for the \\u201ctalk positive\\u201d instruction, it results in the model always outputting \\u201cThis is excellent!\\u201d and similar positive strings regardless of what the user actually asked. In contrast, our images cause the VLM to output an actual positive interpretation of the image based on the specific user query.\\n\\nEven for jailbreaking, prior methods produce responses that *either* correctly respond to queries but are not toxic (and thus do not satisfy the adversary\\u2019s objective), *or* jailbreaking responses that are not based on the actual prompts and do not correspond to the visual content of the image.\"}",
"{\"comment\": \"Thanks for the response. I am wondering if there is any updates on the manuscripts?\"}",
"{\"comment\": \"Thank you for your reply. We have revised the discussion of related work and the framing of our contributions, please see the meta comment for more details.\\n \\nComparison with the prompt matching method of Bailey et al.:\\n\\nIt is difficult to perform a comprehensive comparison because there is no implementation of prompt matching in the public repository of the Bailey et al. project. It appears from the brief description in the paper that the target of prompt-matching image generation is the logits computed by the victim model in response to the adversary\\u2019s text prompts. It is not clear how this method works for different adversarial objectives because experimental evaluation is limited to a single misinformation \\u201cfact\\u201d (which, by design, limits the model\\u2019s conversational ability via \\u201cIgnore all previous instructions\\u201d and trains only on queries about that specific fact).\\n\\nIn our case, the target of image generation is a dataset of query-dependent text strings produced by another model. This enables our images to induce outputs that are never produced by the victim model in response to text prompts (see \\u201cunlocking\\u201d capabilities in our paper). This is impossible with the prompt-matching method of Bailey at al. because it uses only the victim model\\u2019s responses to text prompts.\\n\\nFurthermore, we ensure that the targets of image generation actually satisfy higher-level objectives such as \\u201ctalk positive\\u201d or \\u201ctalk with a Republican bias\\u201d, not simply that they match responses to the adversary\\u2019s text prompt.\\nThis enables our images to induce a wide range of different outputs (this is necessary to maintain conversational coherence and respond appropriately to users\\u2019 queries) while satisfying the adversary-chosen predicate.\"}",
"{\"comment\": \"Thank you for your review.\\n\\n1. L33: We will soften this claim. We are doing injection attacks for arbitrary adversarial objectives while preserving the model\\u2019s conversational capability, ie, the model's outputs (1) correctly respond to users\\u2019 queries about the images, and simultaneously (2) satisfy an adversary-chosen predicate. Because of (1), outputting strings from a predefined distribution that satisfies the adversarial predicate (as prior methods do) is not sufficient.\\n\\n2. L81: Fig. 6 shows our threat model: adversarial images (soft prompts) are crafted by attackers and shared online. Users who ask VLMs about these images are the victims, they do *not* create adversarial images.\\n\\n Because we focus on VLM users as victims of adversarial images, it is important that the model produce plausible responses to victims\\u2019 queries that match the visual content of the image. Prior work focused on VLM users as attackers, ie, creators of adversarial images that aim to evade system prompts, safety alignment, etc. Coherent, plausible, image-based responses to user prompts are thus not important (and not achieved) in prior work.\\n\\n3. Table 4: We did not investigate the specific reason behind LLaVA\\u2019s low transfer rate (0.1) for negative samples. This could be due to model architecture differences or contradictory system instructions. Nevertheless, all transfer rates exceed the no-attack baseline, indicating some degree of cross-model transferability.\\n\\n4. Not overriding system prompt: Our goal is *not* to override the system prompt (in contrast to jailbreaking attacks) but to perform comparably to an explicit user text prompt. The intended behavior is as if the victim himself prompted the model to follow the adversary\\u2019s instruction. \\n\\n The success rate for the \\\"talk negatively\\\" meta-instruction is similar to that of explicit text prompts; other meta-instructions achieved higher success rates. This suggests that effectiveness varies depending on the meta-instruction but overall, image soft prompts perform as good as explicit text prompts.\"}",
"{\"comment\": \"Thank you for your reply, we have updated the PDF to address reviewers\\u2019 questions and concerns. Please see the meta comment for details.\"}",
"{\"metareview\": \"2x borderline accept, 2x borderline reject. This paper proposes a test-time adversarial method that subtly modifies images with \\u201cmeta-instructions,\\u201d so large vision-language models can end up producing specific responses while still appearing to address the image content. The reviewers agree on the (1) straightforward and well-illustrated explanation of how these image-based prompts work, (2) clear writing style, and (3) evidence that the trick can transfer across several different models. However, they note (1) limited novelty compared to earlier work on image-based prompt injection, (2) practical doubts about how often users themselves control or produce the images under attack, and (3) unaddressed questions about what happens when a system prompt conflicts with the adversarial instruction. While the authors did provide follow-up clarifications on their threat model and added some experiments, they did not fully resolve all reviewer concerns, so the AC leans to not accept this paper.\", \"additional_comments_on_reviewer_discussion\": \"N/A\"}"
]
} |
1Xg4JPPxJ0 | Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data? | [
"Yutong Yin",
"Zhaoran Wang"
] | Humans exhibit remarkable compositional reasoning by integrating knowledge from various sources. For example, if someone learns ( B = f(A) ) from one source and ( C = g(B) ) from another, they can deduce ( C=g(B)=g(f(A)) ) even without encountering ( ABC ) together, showcasing the generalization ability of human intelligence. In this paper, we introduce a synthetic learning task, "FTCT" (Fragmented at Training, Chained at Testing), to validate the potential of Transformers in replicating this skill and interpret its inner mechanism. During training, data consist of separated knowledge fragments from an overall causal graph. In testing, Transformers must combine these fragments to infer complete causal traces. Our findings demonstrate that few-shot Chain-of-Thought prompting enables Transformers to perform compositional reasoning on FTCT by revealing correct combinations of fragments, even if such combinations were absent in training data. Furthermore, the emergence of compositional reasoning ability is strongly correlated with model complexity and training-testing data similarity. We propose, both theoretically and empirically, that Transformers learn an underlying generalizable program from training, enabling effective compositional reasoning during testing. | [
"Transformer; Chain-of-Thought; In-Context-Learning; Compositional Generalization"
] | Accept (Poster) | https://openreview.net/pdf?id=1Xg4JPPxJ0 | https://openreview.net/forum?id=1Xg4JPPxJ0 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ziVPGeJGJp",
"yUXOWjSO7O",
"tKIBghnPRL",
"sO1NJj8oDv",
"kxhP0n8z8B",
"h2voL2VjDx",
"h08MYLrUIa",
"fWscibbxGo",
"eoe3d48MFO",
"eVC6eHHwri",
"cHKJuqiz6L",
"bJy3XKyRX3",
"XTsGK0Y1mY",
"XD8sROr0OL",
"WNPcYhmCtj",
"UpVv0LZhRT",
"TDWYvesjy9",
"SFB9nijFGP",
"KOURtwU2St",
"K4EzQhcvIu",
"IqSV2kOMMX",
"GslrpV0twO",
"BiodVnNV9c",
"Af5C3juVhJ",
"9lidulnRgg",
"31VZpqZo1A",
"2gUa6OgOL2"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1732228142347,
1732513853923,
1732228646993,
1732231504414,
1732224691497,
1732221780573,
1730614650406,
1732226935551,
1732673770465,
1737523521586,
1732225176048,
1732226129407,
1732233364877,
1732226910005,
1732221992043,
1730716017633,
1732393805657,
1732226197210,
1732393452489,
1732315126189,
1732244832285,
1732678092553,
1734955500702,
1732559625599,
1730518216282,
1730487793013,
1732222163718
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_EQsG"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_EQsG"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_oHAw"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_n7TN"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_hQE3"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_n7TN"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Area_Chair_7pps"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_oHAw"
],
[
"ICLR.cc/2025/Conference/Submission2678/Reviewer_hQE3"
],
[
"ICLR.cc/2025/Conference/Submission2678/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Common Problems\", \"comment\": \"We summarize common problems concerned by multiple reviewers and our responses to them.\\n\\n**Generality of Our Conclusions**\\n\\nConcerns have been raised by reviewers n7TN, EQsG and hQE3 regarding whether our conclusions, derived from training randomly initialized transformers on a singular synthetic task, are sufficiently generalizable to realistic natural language scenarios. Our response is as follows:\\n\\nOur primary focus and contribution do not center on evaluating the performance of pre-trained models on natural language tasks. Numerous empirical studies have already demonstrated that well-pretrained language models exhibit compositional reasoning on complex tasks such as question answering, mathematical reasoning, and interdisciplinary content generation ([1], [2], [3], [4]). These models generate comprehensive content with elements that rarely co-occur in training data. Our objective is to scientifically validate this capability in a clear setting and explore its underlying mechanisms. To achieve this, it is essential to conduct controlled and tunable experiments where reasoning paths in the test data are distinct from those in the training data and not pre-encoded in the models' weights. In comparison to naturalistic tasks that train and test on intractable natural language corpora, a controllable synthetic task is more suited to our research objectives. Moreover, using synthetic data is a widely employed approach in previous interpretability studies ([5], [6], [7]).\\n\\nTo clarify our research purpose and contributions, we have made modifications outlined in the **Clarifications of the Research Purpose and Contribution** part in next comment.\\n\\nWhile comprehensive scientific interpretability studies on large language models trained on intricate natural corpora are indeed valuable, they remain challenging amidst current research advancements. We hope our research contributes effectively to this ultimate goal.\\n\\n**Clarity of the Notations**\\n\\nReviewers n7TN, oHAw and hQE3 have noted that the clarity of our paper, particularly the notations in Section 3 about introducing the FTCT dataset, should be improved. To enhance the clarity of our presentation, we have made the modifications detailed in the **Notation Simplification and Presentation Improvement** part in next comment.\\n\\n[1] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\"\\u00a0*arXiv preprint arXiv:2210.03350*\\u00a0(2022).\\n\\n[2] Zhou, Denny, et al. \\\"Least-to-most prompting enables complex reasoning in large language models.\\\"\\u00a0*arXiv preprint arXiv:2205.10625*\\u00a0(2022).\\n\\n[3] Khot, Tushar, et al. \\\"Decomposed prompting: A modular approach for solving complex tasks.\\\"\\u00a0*arXiv preprint arXiv:2210.02406*\\u00a0(2022).\\n\\n[4] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\"\\u00a0*arXiv preprint arXiv:2303.12712*\\u00a0(2023).\\n\\n[5] Chan, Stephanie, et al. \\\"Data distributional properties drive emergent in-context learning in transformers.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a035 (2022): 18878-18891.\\n\\n[6] Allen-Zhu, Zeyuan, and Yuanzhi Li. \\\"Physics of language models: Part 1, context-free grammar.\\\"\\u00a0*arXiv preprint arXiv:2305.13673*\\u00a0(2023).\\n\\n[7] Bietti, Alberto, et al. 
\\\"Birth of a transformer: A memory viewpoint.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\"}",
"{\"comment\": \"Thanks for the response. I would like to maintain my score and lean towards accepting the paper.\"}",
"{\"title\": \"Main Modifications\", \"comment\": [\"We summarize the main modifications made in the revised version.\", \"**Clarifications of the Research Purpose and Contribution**\", \"Refine the research purpose in the introduction: \\u201cThis paper validates the potential of Transformers in doing compositional reasoning on synthetic dataset and investigates the inner mechanism eliciting such ability. \\u201d [Lines 45-46].\", \"Pointing out that utilizing models trained on vast natural language is not proper for our research purpose: \\u201cHowever, the complexity and ambiguity of their natural language training and testing data make it hard to scientifically validate the compositional reasoning ability and explore the underlying mechanisms.\\u201d [Lines 42-44].\", \"Reframe the statement about relative knowledge ratio within the scope of FTCT: \\u201ccompositional reasoning emerges as the similarity between training and testing data increases. In FTCT, this is measured by the relative knowledge ratio\\u201d [Lines 69-70].\", \"Change the title of section 4.2: \\u201cThe Similarity between Training and Testing Data Determines the Emergence of Compositional Reasoning\\u201d [Lines 310-312].\", \"Adjust the conclusion: \\u201cOur research validates the potential of Transformers in doing compositional reasoning on synthetic data and investigates the inner mechanism eliciting such ability. \\u201d[Lines 527-528].\", \"**Notification Simplification and Presentation Improvement**\", \"Notations in Section 3:\", \"In Step 1.1 \\\"Fragmented at Training\\\", we provide a detailed example of the training sequence [A, 100, Z, 3, B, 101, H, 1] along with its generation process [Lines 201-202].\", \"In Step 2 \\u201cFew-shot learning\\u201d, the index for few-shot examples has been changed from (f) to (k), a more standard notation indicating shot numbers [Lins 208-213].\", \"In Step 3, we change the \\\"downside processing\\\" in the original version to \\\"downstream processing\\\" which is a more conventional terminology [Lines 216].\", \"In Step 3 \\\"Downstream Processing\\\", extraneous notation details have been removed. A concrete example is used to clarify the processed sentence: \\\"H=?: ... A=110, ... Z=1, ... B=111, ... H=5 \\\\n H=?: ... A=102, ... Z=4, ... B=103, ... H=9\\\" [Lines 218-221].\", \"Notations in Section 5:\", \"We replaced the abstract notations (o^\\\\cm) and (o^\\\\eq) with the actual symbols, using a comma \\u201c,\\u201d and an equals sign \\u201c=\\u201d [Lines 393-394, Lines 396-397].\", \"Notations in Section 6.1:\", \"Similarly, we revised notations (o^\\\\cm) and (o^\\\\eq) to their respective symbols \\u201c,\\u201d and \\u201c=\\u201d. 
We also emphasized comma tokens at different positions by enclosing them in boxes and using various colors for improved clarity [Lines 490-491].\", \"The demonstration of Figure 1:\", \"The values of \\u201cC\\u201ds at the lower right corner of Figure 1 have been changed to 108 and 105 respectively [Lines 170-173] .\", \"A self-explanatory caption of Figure 1 has been added in the newly submitted revised version [Lines 176-183].\", \"We distinguish knowledge points and noisy nodes with gray and blue only at the beginning of the \\u201cFragmented at Training\\u201d stage and remove all subsequent highlights to avoid confusion [Lines 163-166].\", \"Other improvements\", \"We add the formal definition of composition reasoning in the introduction [Lines 34-37].\", \"We change the description of the ABC example in the abstract to indicate our focus on step-by-step reasoning skills [Lines 13-16].\", \"**More thorough and clear discussions in related works**\", \"In the \\u201cStep-by-step reasoning\\u201d part (formerly titled \\u201cChain-of-Thought Prompting\\u201d), we have expanded on how Prystawski et al.'s work has informed our research and delineated the fundamental differences between our studies [Lines 107-111].\", \"In the \\u201cCompositional generalization\\u201d part, we have clarified the motivation and innovations of our FTCT tasks compared to existing works, focusing on step-by-step reasoning, data complexity, and the unique structure that facilitates detailed mechanism analysis [Lines 124-141].\", \"**Relaxation of Evaluation Criteria for Compositional Reasoning Ability**\", \"In the revised version, we have adjusted the whole chain accuracy standard to assess if the model's output includes all vertices and values in the correct order [Lines 255-258]. Compared to the original version that requires vertices and values to be presented consecutively, this revised approach considers additional valid reasoning paths that are disturbed by noise.\", \"We have updated the testing results using these revised criteria in the new version (Figure 2 [Lines 289-302] and Figure 3 [Lines 1358-1376]). The overall pattern remains consistent with the original version which does not affect our conclusions or contributions.\"]}",
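For concreteness, the relaxed whole-chain accuracy criterion described above (all labelled vertices and values must appear in the correct order, with noise allowed in between) amounts to an ordered-subsequence check. The sketch below is only an illustration of that criterion under an assumed `A=102`-style token format; the helper names are ours, not the authors' actual evaluation code.

```python
def pairs(text):
    # Pull out "A=102"-style vertex=value tokens from a generated sentence,
    # ignoring filler/context words that carry no "=".
    return [tok.strip(" ,") for tok in text.replace("\n", " ").split() if "=" in tok]

def whole_chain_correct(output, label_chain):
    """Relaxed criterion: every labelled vertex=value pair appears in order;
    noisy tokens may be interleaved between them."""
    produced = pairs(output)
    i = 0
    for expected in label_chain:
        while i < len(produced) and produced[i] != expected:
            i += 1                       # skip noise tokens
        if i == len(produced):
            return False                 # expected pair missing or out of order
        i += 1
    return True

# Noisy but correctly ordered reasoning passes; a reordered chain does not.
print(whole_chain_correct("A=102, Z=4, B=103, H=9, C=105", ["A=102", "B=103", "C=105"]))  # True
print(whole_chain_correct("A=102, C=105, B=103", ["A=102", "B=103", "C=105"]))            # False
```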
"{\"title\": \"Response for Reviewer hQE3 (PART 4)\", \"comment\": \"[1] Prystawski, Ben, Michael Li, and Noah Goodman. \\\"Why think step by step? Reasoning emerges from the locality of experience.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[2] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\"\\u00a0*arXiv preprint arXiv:2210.03350*\\u00a0(2022).\\n\\n[3] Zhou, Denny, et al. \\\"Least-to-most prompting enables complex reasoning in large language models.\\\"\\u00a0*arXiv preprint arXiv:2205.10625*\\u00a0(2022).\\n\\n[4] Khot, Tushar, et al. \\\"Decomposed prompting: A modular approach for solving complex tasks.\\\"\\u00a0*arXiv preprint arXiv:2210.02406*\\u00a0(2022).\\n\\n[5] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\"\\u00a0*arXiv preprint arXiv:2303.12712*\\u00a0(2023).\\n\\n[6] Chan, Stephanie, et al. \\\"Data distributional properties drive emergent in-context learning in transformers.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a035 (2022): 18878-18891.\\n\\n[7] Allen-Zhu, Zeyuan, and Yuanzhi Li. \\\"Physics of language models: Part 1, context-free grammar.\\\"\\u00a0*arXiv preprint arXiv:2305.13673*\\u00a0(2023).\\n\\n[8] Bietti, Alberto, et al. \\\"Birth of a transformer: A memory viewpoint.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[9] Garg, Shivam, et al. \\\"What can transformers learn in-context? a case study of simple function classes.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a035 (2022): 30583-30598.\\n\\n[10] Xie, Sang Michael, et al. \\\"An explanation of in-context learning as implicit bayesian inference.\\\"\\u00a0*arXiv preprint arXiv:2111.02080*\\u00a0(2021).\\n\\n[11] Zhang, Honghua, et al. \\\"On the paradox of learning to reason from data.\\\"\\u00a0*arXiv preprint arXiv:2205.11502*\\u00a0(2022).\\n\\n[12] Agarwal, Rishabh, et al. \\\"Many-shot in-context learning.\\\"\\u00a0*arXiv preprint arXiv:2404.11018*\\u00a0(2024).\\n\\n[13] Feng, Guhao, et al. \\\"Towards revealing the mystery behind chain of thought: a theoretical perspective.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[14] Li, Zhiyuan, et al. \\\"Chain of thought empowers transformers to solve inherently serial problems.\\\"\\u00a0*arXiv preprint arXiv:2402.12875*\\u00a0(2024).\"}",
"{\"title\": \"Response for Reviewer oHAw (PART 1)\", \"comment\": \"Thank you for the thorough and detailed review! Especially for the novel angle in adding extra experiment and careful reading of technical details. We address your concerns as follows.\\n\\n**W1:** The clarity of the paper is lacking, especially the notation and writing. There is a typo in Figure 1. Concrete examples are needed to improve readers\\u2019 understanding\\n\\n**A for W1:**\\n As for the clarity and concrete example, we do the following modification to make the paper more clear and readable:\\n- Notations in Section 3:\\n - In Step 1.1 \\\"Fragmented at Training\\\", we provide a concrete example of the training sequence [A, 100, Z, 3, B, 101, H, 1], where [A, B] is the child chain from the causal structure with values following the ``+1\\\" operation, and [Z, H] are noise with randomly assigned values [Lines 200-202].\\n - In Step 3 \\\"Downstream Processing\\\", extraneous notation details have been removed. A concrete example is used to clarify the processed sentence: \\\"H=?: ... A=110, ... Z=1, ... B=111, ... H=5 \\\\n H=?: ... A=102, ... Z=4, ... B=103, ... H=9\\\" [Lines 218-221]. An example of the sentence with complete context information is like \\u201cH=?: ARCH Rewards patentsA=102, Pad Influ interfZ=4, google correl latexB=103, treasureFrankH=9\\u201d. Such concrete examples are shown in the Appendix C in the revised version [Lines 832-896].\\n- Notations in Section 5:\\n - We replaced the abstract notations $o^{\\\\texttt{cm}}$ and $o^{\\\\texttt{eq}}$ with the actual symbols, using a comma \\u201c,\\u201d and an equals sign \\u201c=\\u201d [Lines 393-394, Lines 396-397].\\n- Notations in Section 6.1:\\n - Similarly, we revised notations $o^{\\\\texttt{cm}}$ and $o^{\\\\texttt{eq}}$ to their respective symbols \\u201c,\\u201d and \\u201c=\\u201d. We also emphasized comma tokens at different positions by enclosing them in boxes and using various colors for improved clarity [Lines 490-491].\\n\\nThere is a typo in Figure 1. We believe that this corresponds to your second question \\u201cwhere B=106 and B=103, should it be C=108 and C=105, respectively?\\u201d. The answer is yes. The \\u201cC=109\\u201d and \\u201cC=106\\u201d at the lower right corner of Figure 1 should be changed to \\u201cC=108\\u201d and \\u201cC=105\\u201d according to the relationship q(C)=q(B)+2 defined by the causal structure. We sincerely appreciate your attention to detail in identifying this typo. A corrected version has now been uploaded [Lines 170-173]. \\n\\n\\n**W2:** The formal definition of the compositional reasoning should be explicitly written out.\\n\\n**A for W2:** \\nWe add such formal definition explicitly in the introduction section of the revised version [Lines 34-37]. Specifically, we define the compositional reasoning as \\u201cthe skill to integrate discrete pieces of knowledge from multiple sources to form a coherent reasoning, even in the absence of explicit examples connecting these pieces during learning.\\u201d\\n\\n\\n**W3:** Lack of the testing performance of reasoning with incomplete intermediate reasoning steps (reasoning about AC).\\n\\n**A for W3:** \\nOur interpretation of \\\"reasoning about AC\\\" refers to reasoning when not all vertices in the reasoning paths are adjacent in the causal structure\\u2014essentially the ability to reason with incomplete intermediate steps. \\n\\nIt is true that \\\"reasoning about AC\\\" is an aspect of compositional reasoning. 
**However, our research focus, which is validating compositional reasoning ability and doing mechanism analysis , does not necessarily require testing such performance.** A model is deemed to possess compositional reasoning ability as long as it can correctly determine the value of the last vertex in an OOD testing sentence, regardless of how that value is generated. In this study, we choose to elicit Transformers\\u2019 compositional reasoning ability by training them for complete step-by-step reasoning, which does not inherently ensure proficiency in handling incomplete intermediate steps. To avoid potential misunderstandings, we have revised the abstract example to: \\u201cFor example, if someone learns ( B = f(A) ) from one source and ( C = g(B) ) from another, they can deduce the value of ( C ) by reasoning ( C=g(B)=g(f(A)) ) even without encountering ( ABC ) together\\u201d [Lines 13-16].\\n\\nAdditionally, we have included an experiment on reasoning with incomplete intermediate steps (reasoning about AC) in Appendix M [Lines 1722-1744]. **The results in Figure 11 [Lines 1782-1835] indicate that Transformers trained on FTCT exhibit limited proficiency when reasoning with any level of incompleteness.** This limitation likely stems from a bias in the training dataset, where adjacent vertices consistently appear consecutively in the sequences.\"}",
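To make the data construction in this reply easier to picture, here is a small sketch of how a training sequence such as [A, 100, Z, 3, B, 101, H, 1] and its downstream rendering could be assembled. It only illustrates the steps described above; the operation table, noise vocabulary, value ranges, and function names are assumptions rather than the authors' released code.

```python
import random

def build_training_sequence(child_chain, ops, noise_vocab, n_noise=2):
    """Assemble one FTCT-style sequence: causal values along the child chain,
    random values for noise vertices, and a merge that keeps the chain's order."""
    # 1. values along the child chain, e.g. B = A + 1 for the edge (A, B)
    values = {child_chain[0]: random.randint(100, 110)}
    for u, v in zip(child_chain, child_chain[1:]):
        values[v] = ops[(u, v)](values[u])

    # 2. noise vertices get unrelated random values
    noise = random.sample(noise_vocab, n_noise)
    for z in noise:
        values[z] = random.randint(1, 9)

    # 3. insert noise at random positions; the relative order of the child chain is preserved
    seq = list(child_chain)
    for z in noise:
        seq.insert(random.randint(0, len(seq)), z)

    # 4. downstream rendering in the "A=100, Z=3, B=101, H=1" style
    return ", ".join(f"{v}={values[v]}" for v in seq)

ops = {("A", "B"): lambda x: x + 1}
print(build_training_sequence(["A", "B"], ops, noise_vocab=["Z", "H", "W"]))
# e.g. "A=103, Z=5, B=104, H=2"
```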
"{\"title\": \"Response for Reviewer hQE3 (PART 1)\", \"comment\": [\"Thank you for the thorough reviewing and constructive advice! We address your concerns as follows.\", \"**W1:** More prominent attribution should be given to the prior work Prystawski et al. [1].\", \"**A for W1:** To clarify how Prystawski et al.'s work has influenced our research and to outline the fundamental differences between our studies, we have revised the original discussion of their work in the related works section [Lines 107-111].\", \"We clarify that our FTCT data construction is informed by their task's framework: \\u201cOur FTCT structure draws inspiration from their Bayesian networks\\u201d [Lines 107-108].\", \"We specify the structural differences, noting that we introduce additional elements and complicate the dependency: \\u201cadditionally inserting contextual noise and complicating value relationships\\u201d [Line 108].\", \"Furthermore, we underscore the distinct differences in our research goals and contributions: \\u201cWhile they focus on locality structure's impact on CoT efficacy, we investigate how various training factors influence the emergence of compositional reasoning and conduct an in-depth analysis of the mechanisms within Transformer structures that elicit this capability.\\u201d[Lines 109-111].\", \"**W2:** The generality of our conclusions made by training highly specific model structures on synthetic data is questionable. Analysis on more naturalistic reasoning benchmarks is needed.\", \"**A for W2:** We would like to state that our main focus and contribution are not about investigating the performance of pre-trained models on natural language tasks. In fact, many empirical works have already demonstrated the symptom that well-pretrained language models do compositional reasoning on complex realistic tasks like question answering, mathematical reasoning and interdisciplinary generation ([2], [3], [4] [5]). They are shown to generate comprehensive content with elements unlikely to co-occur in training data. **We aim to scientifically validate such ability within a clear environment and explore its underlying mechanism.** To this end, it is necessary to conduct controlled and tunable experiments where reasoning paths in the test data neither appear in the training data nor are they encoded in the models' pre-existing weights. Compared with naturalistic tasks with intractable natural language corpus, we believe that a controllable synthetic task is more appropriate for our research purpose. Doing research on synthetic data is also an approach being widely used by previous interpretability studies ([6], [7], [8]).\", \"To clarify the above argument, we attach a new version of the paper that includes following modifications:\", \"Refine the research purpose in the introduction: \\u201cThis paper validates the potential of Transformers in doing compositional reasoning on synthetic dataset and investigates the inner mechanism eliciting such ability. \\u201d [Lines 45-46].\", \"Pointing out that utilizing models trained on vast natural language is not proper for our research purpose [Lines 42-44].\", \"Reframe the statement about relative knowledge ratio within the scope of FTCT: \\u201ccompositional reasoning emerges as the similarity between training and testing data increases. 
In FTCT, this is measured by the relative knowledge ratio\\u201d [Lines 69-70].\", \"Adjust the conclusion: \\u201cOur research validates the potential of Transformers in doing compositional reasoning on synthetic data and investigates the inner mechanism eliciting such ability. \\u201d[Lines 527-528].\", \"Admittedly, conducting comprehensive scientific interpretability studies on large language models trained on complex natural corpora is meaningful. However, such endeavors remain challenging amidst current research advancements. We hope our research makes a contribution to this ultimate goal.\", \"**W3:** Too many technical details in the Section 3 makes it hard to get the core idea. Problems of Figure 1. Other comprehension/clarity issues in Section 3.\", \"**A for W3:**\", \"Issues with Figure 1:\", \"A self-explanatory caption has been added in the newly submitted revised version [Lines 176-183].\", \"To prevent the distraction caused by blue highlight, in the revised version we distinguish knowledge points and noisy nodes with gray and blue only at the beginning of the \\u201cFragmented at Training\\u201d stage and remove all subsequent highlights to avoid confusion [Lines 163-166].\"]}",
"{\"summary\": \"This paper introduces the \\\"FTCT\\\" (Fragmented at Training, Chained at Testing) task to evaluate if Transformers can perform compositional reasoning similar to humans. The task involves training models on separate knowledge fragments and testing them on integrating these fragments to form complete causal graph traces. The study finds that few-shot Chain-of-Thought prompting helps Transformers combine these fragments correctly, even without seeing such combinations during training. The results indicate that model complexity and the data's knowledge ratio play a role in enabling this skill. The authors provide theoretical and empirical evidence for their claims, showing that Transformers can learn a generalizable program to aid in compositional reasoning. The findings are interesting and suggest potential areas for further exploration.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The design of the FTCT task is well-conceived, as it effectively mimics real-world scenarios where knowledge is often fragmented and must be integrated to draw comprehensive conclusions. This setup provides a meaningful and practical benchmark to evaluate the compositional reasoning abilities of Transformers, making the study relevant and valuable for advancing our understanding of machine learning models' capabilities.\", \"Chapter 5, \\\"transformer does compositional reasoning via the underlying program\\\", is very interesting as it explores the possible underlying mechanisms and principles that allow Transformers to perform compositional generalization. This chapter goes beyond just presenting empirical results by looking into how these models might internally handle and integrate fragmented knowledge. This deeper investigation adds value by giving us a better understanding of how Transformers achieve complex reasoning tasks.\"], \"weaknesses\": [\"While the task studied in this paper requires strong compositional generalization abilities, it is simple and singular in its form. Generally, using a simple and singular synthetic dataset is suitable for highlighting the shortcomings of the Transformer architecture. However, since the paper concludes that Transformers possess this capability, the experiments on this task alone are not sufficient to support such a conclusion. I believe that more diverse and comprehensive tasks are needed, and ideally, this capability should also be validated on complex real-world tasks.\", \"In the related work section, the paper discusses various tasks used to probe compositional generalization abilities. The authors mention that these existing tasks have not been studied from the perspectives of few-shot prompting and chain-of-thought reasoning. However, this distinction alone is insufficient; if the difference is merely in this aspect, it would be possible to modify existing datasets instead of creating a new one. The novelty of this newly created task is not demonstrated well. Therefore, the authors need to provide more explanation regarding the motivation and innovation behind the proposed task.\", \"The experiments in this paper use Transformers with relatively small parameter sizes. It is unclear whether the conclusions drawn from these experiments would hold true for larger Transformer models. 
This limitation raises questions about the generalizability of the findings to more complex and sizable architectures.\"], \"questions\": \"My main concerns have already been expressed in the \\\"weakness\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response for Reviewer n7TN (PART 2)\", \"comment\": \"[1] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\"\\u00a0*arXiv preprint arXiv:2210.03350*\\u00a0(2022).\\n\\n[2] Zhou, Denny, et al. \\\"Least-to-most prompting enables complex reasoning in large language models.\\\"\\u00a0*arXiv preprint arXiv:2205.10625*\\u00a0(2022).\\n\\n[3] Khot, Tushar, et al. \\\"Decomposed prompting: A modular approach for solving complex tasks.\\\"\\u00a0*arXiv preprint arXiv:2210.02406*\\u00a0(2022).\\n\\n[4] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\"\\u00a0*arXiv preprint arXiv:2303.12712*\\u00a0(2023).\\n\\n[5] Chan, Stephanie, et al. \\\"Data distributional properties drive emergent in-context learning in transformers.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a035 (2022): 18878-18891.\\n\\n[6] Allen-Zhu, Zeyuan, and Yuanzhi Li. \\\"Physics of language models: Part 1, context-free grammar.\\\"\\u00a0*arXiv preprint arXiv:2305.13673*\\u00a0(2023).\\n\\n[7] Bietti, Alberto, et al. \\\"Birth of a transformer: A memory viewpoint.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\"}",
"{\"comment\": \"Thank you for your explanation and updates to the paper! I lean towards accepting this paper and maintain my score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response for Reviewer oHAw (PART 2)\", \"comment\": \"**Q1:** Will the model be able to chain together the values of non-adjacent vertices during testing time?\\n\\n**A for Q1:** We've included testing performance for reasoning with incomplete intermediate steps with non-adjacent vertices (reasoning about AC) in the Appendix M [Lines 1722-1744]. The testing results reveal that Transformers trained on FTCT are unable to effectively chain together the values of non-adjacent vertices during testing. This limitation is likely due to a bias in the training dataset, where adjacent vertices invariably appear consecutively in the training sequences. Nevertheless, as elaborated in our response to **W3**, this limitation does not affect the main conclusions presented in this paper.\\n\\n\\n**Q2:** Is there a typo in Figure 1?\\n\\n**A for Q2:** There is a typo in Figure 1, and the values of \\u201cC\\u201ds should be 108 and 105 respectively. We have submitted a correct figure in the revised version [Lines 170-173].\\n\\n\\n**Q3:** Are there more than one set of causal chains? It seems that there is only one sequence of length $n$.\\n\\n**A for Q3:** Indeed, there is only one set of causal chains, and all different chains with various vertices and lengths are encompassed within this set. As defined in Section 3.1 [Line 187], the set of chains $\\\\mathcal{T}(\\\\mathcal{G})$ includes all sequences of connected vertices. The index $n$ can be any natural number as long as that $[v_1, ..., v_n]$ are connected by edges. For the causal structure shown in Figure 1 [Lines 162-174], $\\\\mathcal{T}(\\\\mathcal{G})$ includes sequences [A, B], [B, C], [A, D], [D, E], [A, B, C], and [A, D, E].\\n\\n\\n**Q4:** Why are the noise vertices inserted in a predictable manner?\\n\\n**A for Q4:** We interpret \\\"predictable manner\\\" to mean that within a few-shot example, the noise vertices retain the same letters in the same positions across shots. This design is intended to evoke the model's generalized in-context learning ability by encouraging it to replicate the same vertices from prior examples, thereby ensuring high vertex accuracy during testing. While the appearance of noise vertices is predictable, it does not diminish the significance of the observed emergence of compositional reasoning.\\n\\n\\n**Q5:** Why is a relative knowledge ratio of 0.3 the threshold of the emergence of compositional reasoning? Could it be that 0.3 is when the probability that there is at least one occurrence for every $(v_i, v_{i+1})$ in the train set reaches close to 1?\\n\\n**A for Q5:** Our analysis shows that occurrence probability is not the determining factor for the 0.3 threshold. For every tuple of adjacent vertices and their values $(v_i, q_i, v_{i+1}, \\\\texttt{op}(v_i, v_{i+1})\\\\circ q_i)$, we count the number of times that it has been visited in our training data. Results indicate that every tuple is encountered at least 46 times, across tasks with varying knowledge ratios. The large volume of our training data ensures that, even with a low relative knowledge ratio, the probability of each $(v_i, q_i, v_{i+1}, \\\\texttt{op}(v_i, v_{i+1}) \\\\circ q_i)$ appearing in the training data is nearly 1. **Detailed experiments results are available in the Appendix I [Lines 1452-1500] of the revised paper.**\\n\\nWhat we can confirm so far is that a higher relative knowledge ratio aligns few-shot CoT prompts more closely with training data, thereby enhancing testing performance. 
Understanding the precise mechanism behind this specific threshold remains a future research direction.\\n\\n\\n**Q6:** Why is there a drop in performance when the relative knowledge ratio reaches 0.25?\\n\\n**A for Q6:** The relative knowledge ratio of 0.25 is associated with a training configuration where the graph depth is 8 and the child chain length is 2. Testing curves in Figure 2 (left) [Lines 289-308] and Figure 3 [Lines 1358-1376] show that every training task with a child chain length of 2 performs poorly. That is the reason why the performance drop abruptly at 0.25.\\n\\nOur empirical findings in Figure 2 (right) [Lines 289-308] indicate a strong correlation between compositional reasoning ability and the relative knowledge ratio; however, this does not imply that compositional reasoning ability increases monotonically with the relative knowledge ratio. Indeed, if we set the relative knowledge ratio to $(M/N)^\\\\alpha$, where $\\\\alpha$ is a tunable parameter, an $\\\\alpha$ can be found that results in an apparently monotonic curve.\"}",
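As a concrete reading of the reply to Q3, the chain set $\mathcal{T}(\mathcal{G})$ can be enumerated directly from the edge list, and the relative knowledge ratio discussed in Q5/Q6 appears to be the child-chain length over the graph depth (2/8 = 0.25 in the configuration mentioned). The sketch below is our own illustration under that reading; the ratio formula in particular is inferred from the examples given, not a quoted definition.

```python
def all_chains(edges):
    """Enumerate T(G): every sequence of two or more vertices connected by consecutive edges."""
    children = {}
    for u, v in edges:
        children.setdefault(u, []).append(v)

    chains = []
    def extend(path):
        if len(path) >= 2:
            chains.append(list(path))
        for nxt in children.get(path[-1], []):
            extend(path + [nxt])

    for start in sorted({u for u, _ in edges} | {v for _, v in edges}):
        extend([start])
    return chains

# Figure-1-style causal structure: A -> B -> C and A -> D -> E
print(all_chains([("A", "B"), ("B", "C"), ("A", "D"), ("D", "E")]))
# [['A', 'B'], ['A', 'B', 'C'], ['A', 'D'], ['A', 'D', 'E'], ['B', 'C'], ['D', 'E']]

# Relative knowledge ratio as read off the reply: child-chain length M over graph depth N.
M, N = 2, 8
print(M / N)  # 0.25 -- the configuration tied to the performance drop discussed in Q6
```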
"{\"title\": \"Response for Reviewer EQsG (PART 1)\", \"comment\": \"Thank you for your thorough review and valuable opinions! We address your concerns as follows.\\n\\n**W1:** More diverse, comprehensive and complex real-world tasks are needed to draw the conclusion that Transformers possess the compositional reasoning ability.\\n\\n**A for W1:** As for the need of real world tasks, we would like to clarify that our main focus and contribution are not about proving the ability of real pre-trained language models on natural language settings. In fact, many empirical works have already demonstrated the symptom that well-pretrained language models do compositional reasoning on complex realistic tasks like question answering, mathematical reasoning and interdisciplinary generation ([1], [2], [3] [4]). They are shown to generate comprehensive content with elements unlikely to co-occur in training data. **We aim to scientifically validate such ability within a clear environment and explore its underlying mechanism.** To this end, it is necessary to conduct controlled and tunable experiments where reasoning paths in the test data neither appear in the training data nor are they encoded in the models' pre-existing weights. Compared with realistic tasks with intractable natural language corpus, we believe that a controllable synthetic task is more appropriate for our research purpose. Doing research on synthetic data is also an approach being widely used in previous interpretability studies ([5], [6], [7]).\\n\\nAs for the diversity and comprehensiveness, we argue that the experiments on FTCT is enough for the potential validation and mechanism investigation research purposes. Nonetheless, we acknowledge that incorporating more diverse and comprehensive tasks will enhance the generalizability and impact of our work, which we aim to pursue in future research.\\n\\nTo clarify the above argument, we attach a new version of the paper that includes following modifications: \\n\\n- Refine the research purpose in the introduction: \\u201cThis paper validates the potential of Transformers in doing compositional reasoning on synthetic dataset and investigates the inner mechanism eliciting such ability. \\u201d [Lines 45-46].\\n- Pointing out that utilizing models trained on vast natural language is not proper for our research purpose: \\u201cHowever, the complexity and ambiguity of their natural language training and testing data make it hard to scientifically validate the compositional reasoning ability and explore the underlying mechanisms.\\u201d [Lines 42-44].\\n- Reframe the statement about relative knowledge ratio within the scope of FTCT: \\u201ccompositional reasoning emerges as the similarity between training and testing data increases. In FTCT, this is measured by the relative knowledge ratio\\u201d [Lines 69-70].\\n- Adjust the conclusion: \\u201cOur research validates the potential of Transformers in doing compositional reasoning on synthetic data and investigates the inner mechanism eliciting such ability. \\u201d[Lines 527-528].\\n\\n**W2:** More explanation about the motivation and innovation behind our task is needed to demonstrate the novelty and contribution of the FTCT dataset compared with existing tasks.\\n\\n**A for W2:** We rewrite the \\u201cCompositional generalization\\u201d part in related works section in the new submitted version [Lines 124-141], in which we clarify the motivation and innovation of our FTCT tasks compared with existing works. 
The main logic of the new version is as follows.\\n- Firstly, a series of works has showcased the potential of Transformers in compositional tasks where the answers are directly output without intermediate reasoning steps ([8], [9], [10], [11], [12]). In contrast, our FTCT dataset with deep causal structure allows exploration of explicit reasoning's impact on compositional generalization ability.\\n- Further, empirical studies show that step-by-step reasoning enhances large language models' compositional abilities on real-world tasks like question answering and mathematical reasoning ([1], [2], [3]). However, the complexity of natural language corpora used by them complicates scientific validation compared to our synthetic data. \\n- Recent studies have explored Transformers' generalized reasoning on controllable synthetic tasks ([13], [6], [14]). In contrast, our FTCT task not only ensures controlled experimentation but also introduces measures of training-testing data similarity and establishes a distinct parent-child causal relationship, facilitating analysis of the mechanisms underlying Transformers' compositional abilities concerning data distribution and model structure.\\n\\nWe believe that the above modifications will better demonstrate the novelty and contribution of our FTCT learning task.\"}",
"{\"title\": \"Extra Experiments\", \"comment\": \"During the rebuttal process, we conducted the following additional experiments to address reviewers\\u2019 concerns.\\n\\n**Testing Performance with Noisy Tokens**\\n\\nIn response to reviewer n7TN\\u2019s inquiry about the absence of noisy tokens in $D_{test}$, we included testing results where examples are blurred by noise in the same manner as the training data. These findings, presented in Appendix H.2 [Lines 1401-1451], reveal the same overall pattern as tests without noisy tokens, affirming the robustness of our conclusions.\\n \\n**Conclusions on Larger Transformer Models**\\n \\nTo address reviewer EQsG\\u2019s concern about the generalizability of findings from small Transformers to larger architectures, we evaluated GPT2-small (12 layers, 12 heads, 117M parameters) and GPT2-large (36 layers, 20 heads, 774M parameters) on the FTCT data, with results in Appendix J [Lines 1502-1611]. These models display a similar pattern, where compositional reasoning ability emerges with increased shot numbers and relative knowledge ratios. This suggests our findings are applicable to more complex architectures, although the performance of the larger models is less stable, likely due to overfitting.\\n \\n**Testing Performance with Incomplete intermediate Reasoning Steps**\\n \\nIn response to reviewer oHAw\\u2019s query about the model's ability to link values of non-adjacent vertices during testing, we included the testing performance for reasoning with incomplete steps in Appendix M [Lines 1722-1744]. Results indicate that Transformers trained on FTCT struggle to link non-adjacent vertices, likely due to a training dataset bias where adjacent vertices appear consecutively. \\n\\nThis limitation does not affect the main conclusions presented in this paper, as a model exhibiting robust compositional reasoning ability is not necessarily required to handle non-adjacent vertices directly within its test data. We can prove the compositional reasoning ability by models doing complete reasoning as long as the test data have not appeared in their entirety during training.\\n \\n**Least Visited Times for All Adjacent Vertices and Their Values**\\n \\nTo certify reviewer oHAw\\u2019s hypothesis about the reason why relative knowledge ratio of 0.3 is the threshold of the phase change, we count the times that each tuple $(v_i, q_i, v_{i+1}, \\\\texttt{op}(v_i, v_{i+1})\\\\circ q_i)$ is visited in Appendix I [Lines 1452-1500]. Results show that every tuple is visited at least 46 times, across tasks with varying knowledge ratios, indicating that occurrence probability is not the determining factor for the 0.3 threshold. \\n \\n**Performance of Transformers in reasoning causal chains with varying test lengths**\\n \\nTo address reviewer hQE3\\u2019s concern regarding the unexpected pattern where accuracy decreases with more than one few-shot prompt, we test the performance of Transformers in reasoning causal chains with varying test lengths in Appendix L [Lines 1613-1673]. The results (Figure 10 [Lines 1653-1673]) show that for tasks where test lengths are close to child chain lengths models are trained on, few-shot performance remains stable without decrease. As the gap between child chain and test lengths widens, the performance decrease after one shot becomes evident. 
Thus, we conclude that when differences between training and testing data are limited,\\u00a0the expected pattern of in-context learning appears, where performance improves with more shots and does not decline after reaching its peak. As the gap between testing causal chains and training child chains widens, the performance decrease after one shot becomes evident, indicating the influence brought by OOD tasks.\"}",
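The tuple-counting experiment mentioned above (Appendix I) can be pictured with a few lines of counting code. The sketch below is illustrative only: it assumes noise has already been stripped from each chain and that `ops` lists the causal edges, and the toy data is ours rather than the actual FTCT training set.

```python
from collections import Counter

def count_knowledge_tuples(training_chains, ops):
    """Count visits of each (v_i, q_i, v_{i+1}, op(v_i, v_{i+1}) applied to q_i) tuple,
    where a chain is a list of (vertex, value) pairs with noise already removed."""
    counts = Counter()
    for chain in training_chains:
        for (u, qu), (v, qv) in zip(chain, chain[1:]):
            if (u, v) in ops:            # count only genuine causal edges
                counts[(u, qu, v, qv)] += 1
    return counts

ops = {("A", "B"): "+1", ("B", "C"): "+2"}
data = [[("A", 100), ("B", 101)], [("B", 101), ("C", 103)], [("A", 100), ("B", 101)]]
counts = count_knowledge_tuples(data, ops)
print(min(counts.values()))  # 1 on this toy data; at least 46 in the actual training sets per Appendix I
```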
"{\"title\": \"Response for Reviewer n7TN (PART 1)\", \"comment\": \"Thank you for the thoughtful review and the valuable feedback! We address your concerns as follows.\\n\\n**W1:** It is questionable that to what extent the conclusions made by training randomly initialized transformers on synthetic data can be extended to real pre-trained language models.\\n\\n**A for W1:** We would like to clarify that our primary focus and contribution are not about investigating the performance of real pre-trained language models. As a matter of fact, many empirical works have already demonstrated that real pre-trained language models exhibit symptoms of compositional reasoning by generating comprehensive content with elements unlikely to co-occur in training data ([1], [2], [3], [4]). **We aim to scientifically validate such ability within a clear environment and explore its underlying mechanism.** To this end, it is necessary to conduct controlled and tunable experiments where reasoning paths in the test data neither appear in the training data nor are they encoded in the models' pre-existing weights. Compared with investigating real pre-trained LLMs that have learned intractable corpus, we believe that training randomly initialized Transformers on synthetic data is an appropriate method for our research purpose. Doing research on synthetic data is also an approach being widely used in previous interpretability studies ([5], [6], [7]).\\n\\nTo clarify the above points, we attach a new version of the paper that includes following modifications: \\n- Refine the research purpose in the introduction: \\u201cThis paper validates the potential of Transformers in doing compositional reasoning on synthetic dataset and investigates the inner mechanism eliciting such ability. \\u201d [Lines 45-46].\\n- Pointing out that utilizing models trained on vast natural language is not proper for our research purpose: \\u201cHowever, the complexity and ambiguity of their natural language training and testing data make it hard to scientifically validate the compositional reasoning ability and explore the underlying mechanisms.\\u201d [Lines 42-44].\\n- Reframe the statement about relative knowledge ratio within the scope of FTCT: \\u201ccompositional reasoning emerges as the similarity between training and testing data increases. In FTCT, this is measured by the relative knowledge ratio\\u201d [Lines 69-70].\\n- Adjust the conclusion: \\u201cOur research validates the potential of Transformers in doing compositional reasoning on synthetic data and investigates the inner mechanism eliciting such ability. \\u201d[Lines 527-528].\\n\\nWe acknowledge that conducting comprehensive scientific interpretability studies on large language models trained on complex natural corpora is indeed valuable. However, such endeavors remain challenging amidst current research advancements. We hope our research contributes meaningfully towards achieving this ultimate goal.\\n \\n**W2:** Complicated notations make the paper difficult to follow.\\n\\n**A for W2:** To simplify the notations and make the paper easier to follow, we have made the following changes:\\n- Notations in Section 3:\\n - In Step 1.1 \\\"Fragmented at Training\\\", we provide a detailed example of the training sequence [A, 100, Z, 3, B, 101, H, 1] along with its generation process [Lines 201-202].\\n - In Step 3 \\\"Downstream Processing\\\", extraneous notation details have been removed. A concrete example is used to clarify the processed sentence: \\\"H=?: ... A=110, ... 
Z=1, ... B=111, ... H=5 \\\\n H=?: ... A=102, ... Z=4, ... B=103, ... H=9\\\" [Lines 218-221].\\n- Notations in Section 5:\\n - We replaced the abstract notations $o^\\\\texttt{cm}$ and $o^\\\\texttt{eq}$ with the actual symbols, using a comma \\u201c,\\u201d and an equals sign \\u201c=\\u201d [Lines 393-394, Lines 396-397].\\n- Notations in Section 6.1:\\n - Similarly, we revised notations $o^\\\\texttt{cm}$ and $o^\\\\texttt{eq}$ to their respective symbols \\u201c,\\u201d and \\u201c=\\u201d. We also emphasized comma tokens at different positions by enclosing them in boxes and using various colors for improved clarity [Lines 490-491].\\n\\n**Q1.1:** Why in $D_{train}$ we need to add noisy tokens?\\n\\n**A for Q1.1:** Noisy tokens are added to simulate the dependencies found in natural language corpora, where related tokens often do not appear consecutively and are interrupted by unrelated tokens. This requires Transformers to extract meaningful information from a noisy context. \\n\\n**Q1.2:** Why in $D_{test}$ we do not add noisy tokens?\\n \\n**A for Q1.2:** Prompts in $D_{test}$ are designed without noisy tokens to reflect the typical scenario in which users pose questions to language models, where the given prompts are usually of higher quality and contain less noise. \\n\\nWe include the testing results where testing examples are blurred by noise in the same way as the training data are processed. **These results, shown in Appendix H.2 [Lines 1401-1451], display the same overall pattern as tests without noisy tokens, indicating the robustness of our conclusions.**\"}",
"{\"title\": \"Response for Reviewer hQE3 (PART 2)\", \"comment\": [\"Other comprehension/clarity issues:\", \"It is correct that addition and subtraction are the only possible operations. We clarify this in the revised version [Line 188].\", \"We have illustrated the merge process with a concrete example for clarity [Lines 200-202]. Specifically, for the causal structure in Figure 1 [Lines 162-170], suppose the sampled child chain is [A, B] and the sampled noisy vertices are [Z, H]. We then merge [A, B] and [Z, H] in to a single sequence while preserving the relative order of [A, B]. Potential outcomes include [A, Z, H, B], [A, Z, B, H], and [H, A, B, Z], of which we select [A, Z, B, H]. If the parent vertex A is assigned a value of 100 and the edge [A, B] operation is \\u201c+1\\u201d, B\\u2019s value becomes 101. We randomly sample Z and H values as 3 and 1, resulting in the sequence [A, 100, Z, 3, B, 101, H, 1]. Please refer to Appendix B [Lines 784-798] for the formal merging process description.\", \"We have changed the index of few-shot examples from\\u00a0$f$ to\\u00a0$k$, a more standard notation [Lines 208-214, 222-223, 230-243, 255-263 ...].\", \"The terminology \\\"downside processing\\\" actually refers to \\\"downstream processing\\\", which denotes the final processing step aimed at adapting sequences into natural language-like sentences. We have updated this terminology [Lines 216-217].\", \"We also deleted the redundant construction details and add concrete examples in \\u201cDownstream processing\\u201d part [Lines 220-221].\", \"**Q1:** Why do all Transformer models attain 1.0 values accuracy, even for small Transformers with low vertices accuracy?\", \"**A for Q1:** We first clarify the definitions of vertices accuracy and values accuracy [Lines 262-267]:\", \"Vertices accuracy measures whether the model correctly outputs all vertices included in the label. For instance, with an input like \\u201c\\u2026 C=?:\\u2026 A=100, \\u2026 B=101, \\u2026 C=103 \\\\n C=?: \\u2026 A=102\\u201d, the model is deemed to have accurate vertices if it outputs sentences like \\u201c\\u2026.B=b, \\u2026C=c\\u201d, regardless of the specific values of b and c.\", \"Values accuracy assesses whether the model correctly outputs the values of intermediate vertices, given correct reasoning paths. For values accuracy, if the input is \\u201c\\u2026 A=100, \\u2026 B=\\u201d, the model is considered accurate if it outputs \\u201c101\\u201d as the next token.\", \"Because the values accuracy is tested by prompting models with the reasoning paths that already have accurate vertices, model with low vertices accuracy may still pertain high values accuracy.\"], \"regarding_why_a_one_layer_transformer_achieves_such_performance\": [\"As mentioned in Section 4.3 [Lines 363-365], the limited vertices accuracy is attributed to **the absence of induction heads** in a one-layer Transformer. During testing, models need to use generalized in-context learning ability to retrieve and replicate vertices accurately. As discussed in Section 6.1 [Lines 480-503], induction heads are two heads of the Transformer in different layers that facilitate in-context learning. They are unlikely to exist in one-layer Transformers, thus resulting in poor vertices accuracy.\", \"The High Values Accuracy may be due to the ability of a single-layer attention mechanism to handle parent vertex retrieval and mapping memorization. 
The precise mechanism of it remains unclear to us and we intend to make further exploration in future research.\"]}",
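The two metrics defined in the answer to Q1 can be separated cleanly in code. The sketch below is a minimal reading of those definitions; the regex-based vertex extraction and the `generate` callable (standing in for a trained model's next-token prediction) are assumptions of ours.

```python
import re

def vertices_accurate(output, label_vertices):
    """Vertices accuracy: all labelled vertices appear (checked here in order),
    regardless of the values attached to them."""
    produced = iter(re.findall(r"([A-Z])=", output))   # vertex letters the model emitted
    return all(v in produced for v in label_vertices)  # ordered-subsequence membership

def values_accurate(generate, prefix, correct_value):
    """Values accuracy: given a prefix containing a correct path up to 'B=',
    the model's next token must match the correct value."""
    return generate(prefix).strip() == correct_value

print(vertices_accurate("... B=7, ... C=9", ["B", "C"]))   # True: values may be wrong
print(vertices_accurate("... C=9, ... B=7", ["B", "C"]))   # False: wrong order
print(values_accurate(lambda p: "101", "... A=100, ... B=", "101"))  # True with a stub model
```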
"{\"summary\": \"The work sets out to investigate whether transformers are capable of generalizing to longer reasoning chains through connecting shorter ones seen in the training stage. The authors introduce \\\"Fragemented at Training, Chained at Testing\\\" learning task to train a randomly initialized 3-layer 3-head GPT2-like transformer. They find that with few-shot chain-of-thought prompting, transformers can perform good compositional reasoning skills by combineing fragments together. The authors further show that the generalization performance highly correlates to model complexity (require multiple-layer attention structure) and high relative knowledge ratio of training data. The paper also discusses the internal working of the model (learn an underlying generalizable program) to interpret the transformer's generalization behaviors and provide theoretical insights on transformer's expressivity on learning a such program.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper investigates whether transformers are capable of generalizing to longer reasoning chains through connecting shorter ones seen in the training stage, which is an interesting and important research question.\\n2. The paper is technically sound: the trained transformers behave compositionally (with few-shot chain-of-thought prompting) and the authors provide insights on its internal workings: induction head and attention assignment, demonstrating that the transformer learn a generalizable program in its internal computing.\\n3. Authors also theoretically prove that Transformers have the expressivity to simulate the generalizble underlying program.\", \"weaknesses\": \"1. Since the experiment setting is a randomly initialized transformer trained on synthetic data, to what extent the paper's conclusion can be extended to real pre-trained language models is questionable.\\n2. the notations used in the paper are quite complicated, making the paper a little bit difficult for readers to follow.\", \"questions\": \"1. In the FTCT learning task (e.g., Figure 1), why in the $D_{train}$, we need to add noisy tokens in the token sequence? Why in the $D_{test}$ we do not add noisy tokens in the prompt?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for the updated review\", \"comment\": \"Thank you for your thoughtful and insightful review of our rebuttals, and for your kind adjustments to our scores. These mean a lot to us!\\n\\nWe appreciate the concerns regarding the sufficiency of synthetic data in demonstrating the positive results of Transformers\\u2019 capability. It\\u2019s noteworthy that while some analogous studies ([1], [2], [3]) focus on demonstrating Transformers' shortcomings with synthetic data, others ([4], [5], [6]) effectively utilize simple yet targeted synthetic data to explore the potential of Transformers and analyze their mechanisms. For instance, Zhou et al. ([4]) investigate factors influencing Transformers' length generalization through experiments on the addition of two integers. The controlled and structured nature of synthetic data indeed enhances the precision and scientific rigor of such investigations, aligning well with our research focus.\\n\\nTo further elucidate the generality of our work, we kindly wish to clarify that empirical works ([7], [8], [9], [10]) have shown the symptom indicating that real LLMs possess the compositional reasoning ability. An example is GPT-4's ability to \\\"write a supporting letter for Electron as a US presidential candidate\\u201d ([10]), illustrating a strange combination of fields unlikely to co-occur during training. However, scientific validation and a detailed mechanistic understanding of this ability have been limited by the intractability of natural data. We believe that our experiments and analysis using controlled synthetic data contribute to this understanding. (This research approach parallels methods used in physics. For instance, to investigate the phenomenon of ball lightning observed in nature, physicists have replicated spherical luminous balls through artificial laboratory experiments ([11], [12]), thereby confirming its potential formation and providing explanations through theoretical models.)\\n\\nThank you once more for engaging in this meaningful discussion and offering valuable opinions! \\n\\n\\n[1] Sanford, Clayton, Daniel J. Hsu, and Matus Telgarsky. \\\"Representational strengths and limitations of transformers.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[2] Abbe, Emmanuel, et al. \\\"Generalization on the unseen, logic reasoning and degree curriculum.\\\"\\u00a0*Journal of Machine Learning Research*\\u00a025.331 (2024): 1-58.\\n\\n[3] Zhang, Honghua, et al. \\\"On the paradox of learning to reason from data.\\\"\\u00a0*arXiv preprint arXiv:2205.11502*\\u00a0(2022).\\n\\n[4] Zhou, Yongchao, et al. \\\"Transformers can achieve length generalization but not robustly.\\\"\\u00a0*arXiv preprint arXiv:2402.09371*\\u00a0(2024).\\n\\n[5] Bietti, Alberto, et al. \\\"Birth of a transformer: A memory viewpoint.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[6] Wang, Zixuan, et al. \\\"Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot.\\\"\\u00a0*arXiv preprint arXiv:2406.06893*\\u00a0(2024).\\n\\n[7] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\"\\u00a0*arXiv preprint arXiv:2210.03350*\\u00a0(2022).\\n\\n[8] Zhou, Denny, et al. \\\"Least-to-most prompting enables complex reasoning in large language models.\\\"\\u00a0*arXiv preprint arXiv:2205.10625*\\u00a0(2022).\\n\\n[9] Khot, Tushar, et al. 
\\\"Decomposed prompting: A modular approach for solving complex tasks.\\\"\\u00a0*arXiv preprint arXiv:2210.02406*\\u00a0(2022).\\n\\n[10] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\"\\u00a0*arXiv preprint arXiv:2303.12712*\\u00a0(2023).\\n\\n[11] Paiva, Gerson Silva, et al. \\\"Production of ball-lightning-like luminous balls by electrical discharges in silicon.\\\"\\u00a0*Physical review letters*\\u00a098.4 (2007): 048501.\\n\\n[12] Wu, H-C. \\\"Relativistic-microwave theory of ball lightning.\\\"\\u00a0*Scientific reports*\\u00a06.1 (2016): 28263.\"}",
"{\"title\": \"Response for Reviewer EQsG (PART 2)\", \"comment\": \"**W3:** It is questionable that whether the findings on the small Transfomers can be generalized to more complex and sizable architectures.\\n\\n**A for W3:** We evaluated the performance of GPT2-small (12 layers, 12 heads, 117M parameters) and GPT2-large (36 layers, 20 heads, 774M parameters) on the FTCT data, with results detailed in Appendix J [Lines 1502-1611] of the revised version. These models exhibit the same pattern: compositional reasoning ability emerges with increased shot numbers and relative knowledge ratios, suggesting our findings are generalizable to more complex and sizable architectures. However, the performance of these larger models is less stable compared to smaller ones, likely due to overfitting.\\n \\n\\n[1] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\"\\u00a0*arXiv preprint arXiv:2210.03350*\\u00a0(2022).\\n\\n[2] Zhou, Denny, et al. \\\"Least-to-most prompting enables complex reasoning in large language models.\\\"\\u00a0*arXiv preprint arXiv:2205.10625*\\u00a0(2022).\\n\\n[3] Khot, Tushar, et al. \\\"Decomposed prompting: A modular approach for solving complex tasks.\\\"\\u00a0*arXiv preprint arXiv:2210.02406*\\u00a0(2022).\\n\\n[4] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\"\\u00a0*arXiv preprint arXiv:2303.12712*\\u00a0(2023).\\n\\n[5] Chan, Stephanie, et al. \\\"Data distributional properties drive emergent in-context learning in transformers.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a035 (2022): 18878-18891.\\n\\n[6] Allen-Zhu, Zeyuan, and Yuanzhi Li. \\\"Physics of language models: Part 1, context-free grammar.\\\"\\u00a0*arXiv preprint arXiv:2305.13673*\\u00a0(2023).\\n\\n[7] Bietti, Alberto, et al. \\\"Birth of a transformer: A memory viewpoint.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[8] Hupkes, Dieuwke, et al. \\\"Compositionality decomposed: How do neural networks generalise?.\\\"\\u00a0*Journal of Artificial Intelligence Research*\\u00a067 (2020): 757-795.\\n\\n[9] Arora, Sanjeev, and Anirudh Goyal. \\\"A theory for emergence of complex skills in language models.\\\"\\u00a0*arXiv preprint arXiv:2307.15936*\\u00a0(2023).\\n\\n[10] Yu, Dingli, et al. \\\"Skill-Mix: A flexible and expandable family of evaluations for AI models.\\\"\\u00a0*arXiv preprint arXiv:2310.17567*(2023).\\n\\n[11] Xu, Zhuoyan, Zhenmei Shi, and Yingyu Liang. \\\"Do large language models have compositional ability? an investigation into limitations and scalability.\\\"\\u00a0*ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models*. 2024.\\n\\n[12] Treutlein, Johannes, et al. \\\"Connecting the dots: Llms can infer and verbalize latent structure from disparate training data.\\\"\\u00a0*arXiv preprint arXiv:2406.14546*\\u00a0(2024).\\n\\n[13] Ramesh, Rahul, et al. \\\"How capable can a transformer become? a study on synthetic, interpretable tasks.\\\"\\u00a0*arXiv preprint arXiv:2311.12997*\\u00a0(2023).\\n\\n[14] Ye, Tian, et al. \\\"Physics of language models: Part 2.1, grade-school math and the hidden reasoning process.\\\"\\u00a0*arXiv preprint arXiv:2407.20311*\\u00a0(2024).\"}",
"{\"title\": \"Thank you for the updated review\", \"comment\": \"Thank you for your constructive opinions and for increasing your confidence score. We are glad to hear that many of your concerns have been addressed by our responses.\\n\\nThe simplifications and refinements we mentioned in response to **W2** have been incorporated into the revised version of our paper, corresponding to the lines noted in our response. Additionally, our answer to **Q1** can be found in Appendix H.2 [Lines 1401-1451].\\n\\nWe would like to further explain the generality of our work (**W1**). Empirical studies ([1], [2], [3], [4]) indicate that real LLMs exhibit compositional reasoning abilities. For example, GPT-4 can \\\"write a supporting letter for Electron as a US presidential candidate\\u201d ([4]), which logically combines fields not likely to co-occur during training. However, scientific validation and a detailed mechanistic understanding of this capability are limited by the complexity of natural data LLMs are trained on. We believe our experiments and analysis using synthetic data contribute to this understanding. (Same approach has been used in physics research such as the investigation of ball lightning. Physicists have replicated spherical luminous balls through artificial laboratory experiments ([5], [6]), thereby confirming its potential formation and providing explanations through hypothetical models.)\\n\\nNotably, many studies ([7], [8], [9], [10]) have utilized Transformers trained on synthetic data from scratch to explore their potential and analyze their mechanisms. For instance, Zhou et al. ([7]) investigate factors influencing Transformers' length generalization by training them to predict the addition of two integers. The controlled and structured nature of synthetic data, along with the unbiasedness of randomly initialized models, enhances the precision and scientific rigor of such investigations, aligning well with our research objectives.\\n\\nThank you once again for your valuable opinions and insightful discussion\\u2014they mean a great deal to us.\\n\\n[1] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\"\\u00a0*arXiv preprint arXiv:2210.03350*\\u00a0(2022).\\n\\n[2] Zhou, Denny, et al. \\\"Least-to-most prompting enables complex reasoning in large language models.\\\"\\u00a0*arXiv preprint arXiv:2205.10625*\\u00a0(2022).\\n\\n[3] Khot, Tushar, et al. \\\"Decomposed prompting: A modular approach for solving complex tasks.\\\"\\u00a0*arXiv preprint arXiv:2210.02406*\\u00a0(2022).\\n\\n[4] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\"\\u00a0*arXiv preprint arXiv:2303.12712*\\u00a0(2023).\\n\\n[5] Paiva, Gerson Silva, et al. \\\"Production of ball-lightning-like luminous balls by electrical discharges in silicon.\\\"\\u00a0*Physical review letters*\\u00a098.4 (2007): 048501.\\n\\n[6] Wu, H-C. \\\"Relativistic-microwave theory of ball lightning.\\\"\\u00a0*Scientific reports*\\u00a06.1 (2016): 28263.\\n\\n[7] Zhou, Yongchao, et al. \\\"Transformers can achieve length generalization but not robustly.\\\"\\u00a0*arXiv preprint arXiv:2402.09371*\\u00a0(2024).\\n\\n[8] Bietti, Alberto, et al. \\\"Birth of a transformer: A memory viewpoint.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[9] Wang, Zixuan, et al. 
\\\"Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot.\\\"\\u00a0*arXiv preprint arXiv:2406.06893*\\u00a0(2024).\\n\\n[10] Allen-Zhu, Zeyuan, and Yuanzhi Li. \\\"Physics of language models: Part 1, context-free grammar.\\\"\\u00a0*arXiv preprint arXiv:2305.13673*\\u00a0(2023).\"}",
"{\"comment\": \"I thank the authors for their detailed response to the issues raised by myself and the other reviewers. I also appreciated the effort that the authors took to taxonomize and label their responses, which made it easy to follow.\\n\\nI believe that the authors' responses meaningfully addressed all 3 weaknesses I raised (W1-W3) as well as all questions (Q1-Q3). I especially appreciated the tailored responses to the questions -- these were very helpful and clarifying.\\n\\nAt this point I think the decision around acceptance of this work mainly hinges on generality (W2). I tend to agree with Reviewer EQsG that the burden of proof for this work is higher than for analogous work highlighting shortcomings:\\n> Generally, using a simple and singular synthetic dataset is suitable for highlighting the shortcomings of the Transformer architecture. However, since the paper concludes that Transformers possess this capability, the experiments on this task alone are not sufficient to support such a conclusion.\\n\\nIn response, the authors argue that performing more comprehensive experiments would be \\\"valuable\\\" but \\\"challenging amidst current research advancements.\\\" I sympathize with the view that asking for more systematic benchmarking of LLMs is one of the easiest things to request from a reviewer standpoint, and one of the most burdensome things to implement from an author standpoint. Moreover, I appreciate that the authors (in response to the reviews) added new benchmarks with GPT-2 small and large. This is sufficient for me.\\n\\nIn response to the authors rebuttals, I have updated the following scores in my review:\\n- **Presentation**: 1 \\u2192 2\\n- **Overall Rating**: 5 \\u2192 6\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the response. Many of my previous concerns have been properly addressed (except for W1). As for the further explanation for W2 and Q1, please carefully append them to the revised version of the paper. I would like to maintain my score and lean towards accepting the paper (I raise my confidence score from 3 to 4).\"}",
"{\"title\": \"Thank you for the updated review\", \"comment\": \"Thank you once again for your detailed review and inspiring opinions! They are valuable to us.\"}",
"{\"metareview\": \"All reviewers made strong, positive comments about the submission, with common threads commenting on it being (i) interesting and important research (ii) technical soundness and particularly creative and interesting analysis (iii) the well-conceived nature of the FTCT task. While there were some open questions of how findings transfer to larger models and/or pretrained large-scale transformers, the authors did convincingly argued for the randomly initialised synthetic data as a valid use case while also scaling up to at least the hundreds of millions of parameters scale (GPT-2) level to check for empirical differences in model size.\\n\\nWhile I would have liked to see a strong champion for this submission under the reviewers for the submission, the unanimity in considering the submission above the acceptance threshold, together with the lack of deeper criticism, persuades me that this submission is suitable for publication at ICLR. I would like to ask the authors to take any remaining feedback into account for the camera-ready version.\", \"additional_comments_on_reviewer_discussion\": \"We saw a very engaging and lively discussion between reviewers and authors, with the authors, in particular, giving very well-argued and strong rebuttals to some of the authors' criticisms. The author's arguments around the valid goals and scope of this manuscript (e.g. explicitly not wanting to test large pre-trained language models and instead focusing on new, randomly initialised models) positively influenced my own weighting of the pros and cons of the submission.\"}",
"{\"title\": \"Thank you for the updated review\", \"comment\": \"Thank you once again for your careful review and constructive questions, which are valuable to us!\"}",
"{\"summary\": \"The paper investigates how well transformers are able to perform compositional reasoning tasks. To that end, the paper introduces a new dataset methodology, namely the Fragmented Training, Chained at Testing (FTCT) that simulates how models would be presented with training data in practice (with incomplete fragments of reasoning paths with noise + context) and how well the model is able to piece together the full reasoning chain in test-time. Using this methodology, the paper runs insightful experiments that ablate different lengths of partial reasoning chains during training, different transformers and neural architectures, and number of few shot CoT examples. Through these experiments, the authors find that few shot CoT plays an important role for compositional reasoning, the impact of increasing relative knowledge ratio, and the increasing expressibility of adding layers and heads in the transformers architecture. Lastly, the paper presented some empirical evidence that you need a certain complexity of the transformers architecture to simulate the optimal program for the FTCT task in training and testing.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper presented a very intriguing and creative approach to testing the ability for models to learn compositional reasoning ability\", \"There are some really interesting results, specifically the exact complexity (and the increased expressability) needed for the transformer architecture to optimally solve the FTCT task\", \"The insights regarding the few shot CoT results are of significance and spark further research in this area\", \"The empirical findings of how the transformers performs this task is enlightening and should spark some interest for further research in this area\"], \"weaknesses\": [\"The clarity of this paper is lacking, especially in the notation and writing. For instance, in Figure 1, there is a seeming typo in some of the values that contradicts the setup of the dataset. Separately, some concrete examples of the data (including noise + context tokens) of the FTCT dataset would really improve the readers understanding (it took me multiple re-read to get the gist of the methodology)\", \"The paper's definition of compositional reasoning should be explicitly written out in the paper. The only real definition of this is in the abstract where it is stated that \\\"For example, if someone learns ( B = f(A) ) from one source and ( C = g(B) ) from another, they can deduce ( C = g(f(A)) ) effortlessly, even without encountering ( AC ) or ( ABC ) together, showcasing their compositional generalization ability.\\\"\", \"With this FTCT methodology, it seems clear that the model is learning some ability to connect sequential reasoning chains together (e.g. during training the model might just as AB and BC and correctly chain ABC), but the approach does not test if the model can correctly reason about AC in test-time, which is an aspect of compositional reasoning (as mentioned in the abstract)\"], \"questions\": [\"Although it is clear that the model is learning some ability to connect reasoning chains together (e.g. during training the model might just as AB and BC and correctly chain ABC), will the model be able to correctly chain together the values of AC? 
This could make for an interesting experiment where we could have some skip links in the test data and check for values accuracy\", \"Checking my understanding, is there a typo in Figure 1, where B=106 and B=103, should it be C=108 and C=105, respectively?\", \"Is there more than one set of causal chains? The set equation in line 155 seems to suggest there is only one sequence of length n.\", \"Why are the noise vertices inserted in a predictable manner?\", \"I am curious about this 0.3 relative knowledge ratio threshold where it is reported that compositional reasoning emerges. Could it be that 0.3 is when the probability that there is at least one occurrence for every (v_i, v_{i+1}) in the train set reaches close to 1?\", \"Why is there a drop in performance in Figure 2 (right) at a relative knowledge ratio of 0.25?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": [\"**Aims:** This paper seeks to understand at a mechanistic level how Transformers are able to perform compositional reasoning in a few-shot chain-of-thought setting.\", \"**Methods:** A synthetic dataset is generated consisting of chains of nodes and edges derived from causal graphs. At training time, spurious nodes are inserted randomly into the chains; at testing time, few-shot prompts consisting of intact chains (no spurious nodes) are provided to the model. Models are tested on their ability to reconstruct full causal chains from fragmented chains learned in training, with evaluation based on accuracy in predicting both the correct vertex order and values in the chain.\", \"**Results:**\", \"Zero-shot versus few-shot prompting is compared, with findings showing that few-shot CoT prompting significantly enhances performance in compositional reasoning tasks, particularly in forming the correct vertex order.\", \"A space of small, GPT-2-style models ranging from 42M-54M parameters are trained on the FTCT dataset. Results show that multi-layer, multi-head Transformers (minimum 2 layers and 2 heads) perform notably better, while single-layer/single-head models and MLPs perform poorly.\", \"The impact of training data\\u2019s relative knowledge ratio (ratio of child chain length to complete chain length) is studied, with a critical threshold (ratio \\u2265 0.3) identified where compositional reasoning reliably emerges.\", \"Mechanisms underlying the model's success, such as induction heads for in-context learning and attention patterns facilitating parent-child relationships, are analyzed through linear probing, revealing specific mechanisms by which the model achieves compositional reasoning.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**Motivation:** The key research questions of the paper are clearly delineated in the introduction:\\n\\n1) When are Transformers able to perform compositional reasoning by connecting fragmented knowledge in training data?\\n2) How do different training factors impact the emergence of this ability?\\n3) What internal mechanisms enable Transformers to develop this ability?\\n\\nThese questions are broadly relevant to current research and the paper is structured in a way that consistently centers these 3 questions throughout.\\n\\n**Mechanistic interpretability analysis:** I especially enjoyed the approach to question (3). Broadly speaking, the authors approach this question by first demonstrating that there exists a program that solves the task (Sec. 5.1) and that this program can be approximated by a 2-layer Transformer (Sec. 5.2). Then, through linear probing experiments (Sec. 6), they give an empirical argument that the Transformers trained on FTCT have learn to implement this program. I am not an expert on probing so I can\\u2019t speak to the soundness of the methods, but I found the combination of Sec. 5-6 to be an elegant argument from a mechanistic interpretability standpoint.\", \"weaknesses\": [\"**Novelty:** The paper claims to \\u201cintroduce a learning task\\u201d (FTCT) based on causal graphs, and yet the design of this task is nearly identical to the setup in Prystawski et al. (2024). 
Given that the main distinction between FTCT and the prior work is the introduction of spurious nodes (line 105-106), I would expect to see this prior work\\u2014which actually *did* introduce a novel learning task\\u2014given more prominent attribution.\", \"(Currently this work is referenced in Line 103\\u2014\\u201dThe empirical findings of our work align with the observation in (Prystawski et al., 2024)\\u2026\\u201d The wording of this reference obfuscates the underlying causal structure that this prior work likely played in informing the current paper.)\", \"**Generality:** The key findings of this paper are framed in very broad terms:\", \"> \\u201cThe emergence of compositional reasoning is highly influenced by the data\\u2019s relative knowledge ratio and model complexity. Specifically, a relative knowledge ratio of at least 0.3 and a Transformer architecture with at least two layers and two heads are critical for achieving this ability.\\u201d (Lines 520-523)\", \">\", \"However, these conclusions are all drawn relative to one synthetic dataset with a highly-specific structure; it is unclear to what extent the empirical conclusions (e.g., compositional reasoning in transformers requires a relative knowledge ratio \\u2265 0.3) generalize beyond the FTCT task. To make a convincing argument that these results have meaning beyond this one benchmark, this analysis ought to be replicated on more naturalistic reasoning benchmarks where few-shot CoT prompting is commonly used.\", \"**Clarity:** The description of the FTCT dataset/task design (Sec. 3) fails to convey a clear description of the experiment setup and requires too much work of the reader. All aspects of prompt construction are described in excruciating formal detail, making it hard to separate design choices that are key to the experiment from implementation details. Overall, the formalism in this section is a barrier to understanding what\\u2019s going on at a more basic level.\", \"Fig. 1 lacks basic signposting needed to convey what is going on.\", \"First off, there is no caption. This is a major omission as the figure is definitely not self-explanatory.\", \"The blue highlights draw the reader\\u2019s attention to spurious features of the task (noise nodes) instead of the actual purpose of the task (predicting values of causal nodes).\", \"Other comprehension/clarity issues in Sec. 3:\", \"\\u201cWe assume op(e) represents operations like (+a) or (\\u2212b)\\u201d Does this mean addition/subtraction are the *only* possible operations?\", \"I don\\u2019t understand how the merge operation works from the description.\", \"Some unconventional choices of notation, such as using $f$ as an index over few-shot examples.\", \"What is \\u201cdownside processing\\u201d - do you mean \\u201cdownstream\\u201d?\"], \"questions\": \"1. Table 1 shows that all Transformer models attain 1.0 Values accuracy, even for small models that get very low Vertices accuracy. Can you account for this discrepancy?\\n2. An unintuitive pattern in the results (e.g., Fig. 2 and Table 3) is that accuracy *decreases* with the number of few-shot prompts $>1$. This results stands in contrast to a standard ICL setting, where inclusion of more examples typically improves performance. It is stated that this is \\u201cpossibly due to increased dissimilarity between training and testing data with more CoT examples\\u201d (Line 277-278). Why does including more CoT examples causes the test data to be OOD? 
If this is the case, this seems like an important confound affecting this experiment setup that may not be adequately accounted for.\\n3. It is interesting to contrast the results from Sec. 5-6 with Zhang et al., 2022 (\\u201cOn the Paradox of Learning to Reason from Data\\u201d), who apply a similar methodology but find that gradient descent fails to discover the correct $\\\\theta^*$ for a logical induction task with very similar structure. Is there a reason why here the training succeeds at uncovering the underlying program, whereas in previous work it does not? More generally, it would be nice to see reference to this paper in the discussion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response for Reviewer hQE3 (PART 3)\", \"comment\": \"**Q2:** Why does accuracy decrease with more than one few-shot prompt?\\n\\n**A for Q2:** While it is typically expected that performance improves with more few-shot examples, it is not rare for performance to fluctuate or even decrease as additional examples are provided (see Figure 4b in [9] and Figures 7, 10 in [10]). Notably, Figure 2 in Agarwal et al. [12] demonstrates that the optimal number of shots for peak performance is often lower than the maximum number they can handle, indicating that performance does not always increase monotonically with the number of shots.\\n\\nWe argue that performance decreases when shots number > 1 because more CoT examples increase the dissimilarity between training and testing data. To explain this, we start by analyzing the difference between one-shot examples from the testing and training data. During testing, one-shot examples typically consist of the longest chains of length $N$. In contrast, one-shot examples during training comprise child chains of length $M < N$. As the number of shots increases, each shot introduces extra instances of these $N-M$ missing vertices, exacerbating the disparity between training and testing prompts. This growing difference makes it harder for the model to recognize testing data patterns, thus reducing performance. In our setup, a single shot during testing already provides ample information about the vertex order needed for generating correct answers. For a $k$-shot testing example, the additional $k-1$ shots do not add valuable information and only exacerbate the divergence between training and testing data. Consequently, we observe that testing performance peaks at 1-shot and diminishes thereafter.\\n\\nWe test the performance of Transformers in reasoning causal chains with varying test lengths in Appendix L [Lines 1613-1673]. The results (Figure 10 [Lines 1653-1673]) show that for tasks where test lengths are close to child chain lengths models are trained on, few-shot performance remains stable without decrease. As the gap between child chain and test lengths widens, the few-shot performance decrease becomes evident. Thus, we conclude that **when differences between training and testing data are limited,\\u00a0the expected pattern of in-context learning appears, where performance improves with more shots and does not decline after reaching its peak. As the gap between testing causal chains and training child chains widens, the performance decrease after one shot becomes evident, indicating the influence brought by OOD tasks.**\\n\\nThis non-monotonic pattern does not affect our conclusions and contributions, as the performance at the optimal number of shots already sufficiently reflects the compositional reasoning ability.\\n\\n**Q3:** In contrast to our successful results, Zhang et al. [11] find that Transformers fail to uncover the underlying program to generalize to different testing data. What explains our different outcomes?\\n\\n**A for Q3:** Possible reasons of this difference are as follows.\\n\\n- CoT prompting: Zhang et al. utilize BERT models to directly generate answers for complex reasoning tasks. In contrast, we employ few-shot CoT prompting to facilitate step-by-step reasoning in GPT2-like Transformers, which has been shown to provably enhance model\\u2019s expressivity and generalization ability ([13], [14]). 
Without the help of few-shot CoT, Transformers also struggle in FTCT (poor zero-shot performance in Figure 2 (left) [Lines 289-308] )\\n- Statistical Features:\\u00a0Our FTCT dataset contains fewer \\\"statistical features\\\" compared to those in Zhang et al.'s datasets. As per Zhang et al., a statistical feature is a specific statistic of an example that strongly correlates with its label. The presence of numerous statistical features can lead models to learn these correlations rather than the true underlying program. Our dataset\\u2019s testing performance is segregated into vertices accuracy and values accuracy. Each vertex token relates solely to the same token in preceding few-shot examples, without obvious statistical features that correlate highly with it. It is the same for the values, which is only related with its corresponding vertices and its parent.\\n\\nIn the revised version, we discuss this work in the related works section [Lines 139-141].\\n\\nWe would like to emphasize that Transformers can not solve every FTCT task successfully. For those tasks with low relative knowledge ratio, Transformers also fail to generalize (Figure 2 (right) [Lines 289-308]). We observe a phase transition of the generalization ability as the relative knowledge ratio increases, indicating the key role played by data structure in eliciting models\\u2019 generalization ability.\"}"
]
} |
1X85iw7tqY | CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning | [
"Qingqing Cao",
"Mahyar Najibi",
"Sachin Mehta"
] | Pretraining strong vision or multimodal foundation models like CLIP relies on large-scale datasets (e.g., image-text pairs) that may be noisy, potentially misaligned, and have long-tail distributions. Previous work has shown promising results in augmenting datasets by generating synthetic samples. However, they only support domain-specific ad hoc use cases (like for image or text alone) and are limited in data diversity due to a lack of fine-grained control over the synthesis process.
We design a controllable image-text synthesis pipeline called CtrlSynth to enable data-efficient multimodal learning and improve vision and multimodal models in various use cases. The key idea is to decompose the visual semantics of an image into basic elements, apply user-specified control policies (e.g. remove, add, replace operations), and recompose them to synthesize images or texts. The decompose and recompose feature in CtrlSynth allows users to control data synthesis in a fine-grained manner by defining customized control policies to manipulate the basic elements. CtrlSynth leverages the capabilities of pretrained foundation models such as large language models (LLMs) or diffusion models (DMs) to reason and recompose basic elements such that synthetic samples are natural and composed in diverse ways. CtrlSynth pipeline is training-free and has a modular design, making it easy to support different pretrained models.
CtrlSynth pipeline is also closed-loop, meaning it can synthesize text data based on the image or vice versa. Our evaluation shows that CtrlSynth samples substantially improve zero-shot classification, image-text retrieval, and compositional reasoning performance of CLIP models. We will publicly release the code and pipeline for future research. | [
"clip",
"synthetic data",
"multimodal learning",
"longtail"
] | Reject | https://openreview.net/pdf?id=1X85iw7tqY | https://openreview.net/forum?id=1X85iw7tqY | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"owZubMEgCN",
"m6BIwUadf7",
"eNf8DczsxL",
"eGCVkvGHsi",
"WvxOvNEO7k",
"UEo4DYFu7R",
"PbPLNoy3eZ",
"MchevACPDw",
"LV3EJDiHQt",
"KHxCbnXskn",
"EofyuSNvfR",
"EmO51xNSl8",
"DZSCvY6olX",
"9ExYSHU0du",
"6K1g6uKgjs",
"5CO6Rab6K8",
"0mR67ufeh0"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"decision"
],
"note_created": [
1732310767925,
1730618014896,
1733250991293,
1730292086662,
1733199704823,
1729432317873,
1731998047823,
1731998728372,
1731998013856,
1730744741227,
1732503091352,
1731998700255,
1734935615107,
1732514945583,
1731997545647,
1731998552329,
1737524115045
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Reviewer_yHqv"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Reviewer_nSoW"
],
[
"ICLR.cc/2025/Conference/Submission11271/Reviewer_L3Nm"
],
[
"ICLR.cc/2025/Conference/Submission11271/Reviewer_JT2Z"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Reviewer_L3Nm"
],
[
"ICLR.cc/2025/Conference/Submission11271/Reviewer_nSoW"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Area_Chair_P6aN"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11271/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Reviewers,\\n\\nThank you once again for recognizing the significance of our work, the clarity of our ideas, and the effectiveness of our results. We truly value your insightful comments and have carefully considered them in our responses.\\n\\nWe kindly wanted to check if our replies have satisfactorily addressed your concerns or if there are any additional points you'd like us to address. We hope that our efforts will encourage you to reconsider your review score, but we are more than happy to engage further if you have additional feedback.\\n\\nThank you for your time and thoughtful input.\"}",
"{\"summary\": \"The paper proposes CtrlSynth, a controllable image-text synthesis pipeline for data-efficient multimodal training. Addressing limitations in existing large-scale datasets that are often noisy and misaligned, CtrlSynth enables fine-grained control by decomposing images into basic elements and applying user-specified modifications to synthesize new data. This training-free and flexible pipeline can work with different models and supports closed-loop synthesis (image to text and vice versa). The proposed method also boosts the performance of multimodal model training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper focuses on noise and misalignment in the large-scale image-text datasets, which is a critical challenge in multimodal learning.\", \"The paper introduces an innovative approach that emphasizes fine-grained control, utilizing generative models to decompose and refine images and texts at a detailed level. Notably, it is training-free and suited for integration with different pre-trained generative models.\", \"The experiments presented in the paper show that the proposed method improves downstream performances of multimodal models.\"], \"weaknesses\": \"The paper, while contributing valuable ideas, has several notable weaknesses that are significant and need to be addressed.\\n\\n### Methodological Weaknesses\\n- The proposed pipeline shares significant similarities with GenArtist[1] on image editing. The paper does not clearly demonstrate the differences between this work and GenArtist. It is important for the authors to specify these distinctions and highlight the novelty of their approach. Additionally, a thorough comparison should be incorporated into the experimental section to strengthen the evaluation.\\n- While fine-grained control is presented as the main contribution, represented by the text and image controllers in the pipeline, the design is inadequate and lacks clarity. The design of the pipeline does not effectively demonstrate how the editing condition is provided to the generative model in a fine-grained manner. The text controller relies solely on prompt concatenation, making the mapping between visual tags and policies unclear and limiting precise control. Additionally, the paper does not address how to maintain image consistency after editing, which is essential for practical use. These shortcomings contribute to potential inconsistencies and an insufficient explanation of how fine-grained control is maintained. The image controller exists the same problem.\\n\\n### Experimental Limitations\\n- The datasets used (CC3M and CC12M) are relatively small, with no experiments conducted on larger datasets such as LAION-400M or LAION-5B.\\n- The paper only tests a limited range of multimodal model structures, lacking experiments on models like BLIP and CLIP of different ViT models.\\n- The study does not address data-efficiency validation. Existing data-efficiency-focused works, such as SemDeDup[2], Filter-&-Align[3], and Sieve[4], refine or filter datasets for better performance. The paper should include comparisons with these approaches in terms of model performance and the amount of training data.\\n\\n---\\nReference\\n\\n[1] Zhenyu Wang, Aoxue Li, Zhenguo Li, and Xihui Liu. GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing. arXiv.2407.05600.\\n\\n[2] Amro Abbas, Kushal Tirumala, Daniel Simig, Surya Ganguli and Ari S. Morcos. 
SemDeDup: Data-efficient learning at web-scale through semantic deduplication. arXiv.2303.09540.\\n\\n[3] Lei Zhang, Fangxun Shu, Tianyang Liu, Sucheng Ren, Hao Jiang, and Cihang Xie. Filter & Align: Leveraging Human Knowledge to Curate Image-Text Data. arXiv.2312.06726.\\n\\n[4] Anas Mahmoud, Mostafa Elhoushi, Amro Abbas, Yu Yang, Newsha Ardalani, Hugh Leather, and Ari Morcos. Sieve: Multimodal Dataset Pruning Using Image Captioning Models. arXiv.2310.02110.\", \"questions\": \"Refer to the Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the insightful discussion. We would like to clarify that the main contribution of our work is not solely to achieve the best CLIP performance on synthetic or real datasets. Rather, our main focus is on introducing a closed-loop image-text synthesis system that has broad applicability. As a demonstration of its utility, we show that it enhances CLIP performance using the same number of samples as the original CLIP training sets.\\n\\nAdditionally, we acknowledge that questions remain regarding how to retrieve similar samples from larger training sets. This includes challenges such as building a general index for semantic retrieval, determining which samples to retrieve, and related areas that we view as important directions for future research.\\n\\nMost importantly, we emphasize that our synthetic samples address a critical gap in longtail tasks, where large training datasets are unavailable and retrieval-based approaches are not feasible.\\n\\nWe hope this provides a clearer perspective on our work, and we appreciate your continued engagement and feedback.\"}",
"{\"summary\": \"This paper introduces a multimodal data synthesis pipeline called CtrlSynth. Specifically, CtrlSynth includes a vision tagging model to extract key objects, attributes, and relations from an image, which can then optionally be combined with the original text for a language model to generate new image descriptions. Finally, the newly generated image caption is input into a text-to-image model to generate an image. The authors have demonstrated the effectiveness of their pipeline by comparing it with CLIP pretraining data. Overall, the enhanced dataset appears to be superior to the original one.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea is clear and effective: by combining multiple expert models, we can obtain fine-grained image tags, captions, and synthetic images, which together help to create a high-quality synthetic dataset.\\n\\n2. The modularized pipeline is flexible, as each model can be replaced without affecting the performance of the other components.\\n\\n3. Experiments are comprehensive. Compared to the baseline CLIP, the improvements from CtrlSynth are evident.\", \"weaknesses\": \"1. Practical concerns: By using several models, such as the vision tagging model, LLM, and diffusion model, the proposed method might not be efficient for scaling up to larger datasets, particularly considering the time cost associated with image synthesis.\\n\\n2. The assumption behind CtrlSynth is based on a fixed number of data samples, where the method helps a model achieve better performance than training on the original dataset. However, given the recent trends in LLM and multimodal LLM research, where pretraining data continues to scale up, the proposed method may not be scalable for very large datasets. While this is a challenge, under the current setting in the paper, CtrlSynth is indeed effective.\", \"questions\": \"Can the authors provide details on the overall efficiency of the proposed pipeline? For example, how long does it take to generate 1 million images along with their captions? It would also be good to know the time cost at each component, e.g. vision tagging, caption generation, image generation. A more complete picture of the efficiency in the pipeline would better help to assess the value of this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the response. After reading the rebuttal, it seems to me that most of their performance gain comes from having more diverse data which is generated by their pipeline. However, the augemntation is on a smaller dataset. Since the gains don't come from generating any specific aspect which is hard to get in real data (like specific compositions that are hard to find in real images etc), a concern still remains whether simply using more real data is just easier than generating. Especially because their generation also seems to take a long time. For instance, if we look at some of the CLIP performance numbers when simply trained on a bigger dataset (eg, here: https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv), the performance is higher than their synthetic data augmented numbers. It would be nice if the authors could show the value of their synthetic data added vs just retrieving a similar number of examples more from the real data already available in bigger training sets.\\n\\nHowever, that said, it is still interesting that synthetic generation from simply variations in captions can boost performance and hence, I keep my score. But I urge the authors to add such an experiment or acknowledge these in the limitations.\"}",
"{\"summary\": \"This paper introduces CtrlSynth, a controllable pipeline for generating synthetic image-text data to improve multimodal models. By allowing fine-grained control over data synthesis, CtrlSynth decomposes and recomposes visual semantics using pretrained models, enhancing diversity and alignment of generated samples.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Introduces a new controllable synthesis pipeline (CtrlSynth) that allows fine-grained data manipulation, enabling user-defined policies for image-text synthesis.\", \"Achieving significant performance improvements across diverse tasks such as zero-shot classification, retrieval, and long-tail recognition is inspiring.\", \"Clearly explains the methodology with diagrams and examples, easy to understand the synthesis process and its components.\"], \"weaknesses\": \"1. In the ablation experiments, it was observed that the performance improvement brought by CtrlSynth-img alone is minimal. Would it be possible to completely remove the generation of synthetic images and focus resources on improving the diversity and quality of synthetic text? Would this lead to consistent performance improvements across all tasks?\\n\\n2. The paper mentions that CtrlSynth uses a self-filtering mechanism to improve the quality of synthetic data, but it lacks detailed explanations about the implementation, such as how the alignment threshold for visual tags is selected.\\n\\n3. The paper does not explain in depth how CtrlSynth fundamentally differs from other caption augmentation methods like VeCLIP and LaCLIP. It is necessary to provide a clearer comparison, clarifying whether the increased diversity brought by the decomposition of visual tags and user control strategies is more important, or whether it is the generation of more fine-grained semantic captions that matters.\\n\\n4. The experiments may be limited to a few selected models (e.g., Mistral-Nemo and Qwen2-7B). Would using larger LLMs lead to better results? \\n\\n5. A drawback of this method is that the data generation pipeline involves multiple different models and is not end-to-end in training, requiring substantial resources and time for building the synthetic data in the early stages.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \">The datasets used (CC3M and CC12M) are relatively small, with no experiments conducted on larger datasets such as LAION-400M or LAION-5B.\\n\\n**Response 3**: The main goal of CtrlSynth is to demonstrate the effectiveness and controllability of diverse text-image synthesis across different settings, including image-text datasets like CC3M and CC12M, as well as vision longtail datasets. We acknowledge that our dataset scale is relatively small, but scaling the synthesis pipeline to larger datasets would require substantial computational resources, particularly for image generation.\\n\\nMoreover, we are unable to utilize the LAION datasets due to the presence of sensitive and NSFW content, as highlighted in recent research (\\\"Into the LAION\\u2019s Den: Investigating Hate in Multimodal Datasets,\\\" https://arxiv.org/abs/2311.03449). This poses legal challenges that prevent us from using these datasets. Given our computing limitations and the short rebuttal timeframe, we cannot provide additional experimental results at this time. However, we plan to include experiments with DataComp1B for caption synthesis in the next version of our work.\\n\\n>The paper only tests a limited range of multimodal model structures, lacking experiments on models like BLIP and CLIP of different ViT models.\\n\\n**Response 4**: Due to time and computational constraints, we will include experiments with CLIP using different ViT backbones (ViT-L and ViT-H) in the next version. For vision-language models (VLMs) such as BLIP and LLaVA, the experimental setup is more complex, as it requires two distinct stages of training: pretraining with image-text pairs and finetuning with instruction-tuning data (e.g., visual QA pairs). In future work, we will explore the impact of augmenting the pretraining image-text dataset with synthetic pairs generated by our CtrlSynth pipeline to better understand the benefits of our approach.\\n\\n>The study does not address data-efficiency validation. Existing data-efficiency-focused works, such as SemDeDup[2], Filter-&-Align[3], and Sieve[4], refine or filter datasets for better performance. The paper should include comparisons with these approaches in terms of model performance and the amount of training data.\\n\\n**Response 5**: We appreciate the reviewer for highlighting related work on data efficiency. Our primary contribution is to generate diverse synthetic text-image data in a controlled manner, with data efficiency being an additional benefit of our approach. As illustrated in Figure 5 in Section 4.4, our synthetic samples demonstrate significant efficiency gains.\\n\\nWhile our method is orthogonal to prior work on data efficiency, we have added a discussion in Section 4.4 of the revised paper to acknowledge and contextualize this relationship. Additionally, our synthetic samples notably improve performance on long-tail tasks, where conventional data filtering methods do not apply.\"}",
"{\"comment\": \">The experiments may be limited to a few selected models (e.g., Mistral-Nemo and Qwen2-7B). Would using larger LLMs lead to better results?\\n\\n**Response 4**:\\nMistral-Nemo, with 12 billion parameters, is a larger model compared to Qwen2-7B, which has 7 billion parameters. As shown in Table 8, we observe improvements when using the larger Mistral-Nemo model. We leave the exploration of scaling to even larger models for future work.\\n\\n\\n>A drawback of this method is that the data generation pipeline involves multiple different models and is not end-to-end in training, requiring substantial resources and time for building the synthetic data in the early stages.\\n\\n**Response 5**:\\nWe believe that the modularity of our pipeline, which supports different models, is a strength rather than a drawback. It does not rely on model-specific configurations, making it flexible and adaptable. Additionally, the pipeline is currently training-free, meaning it does not require end-to-end training. Our focus is on leveraging the capabilities of pretrained models in a plug-and-play manner to facilitate data generation, rather than fine-tuning or adapting the models for specific use cases.\"}",
"{\"comment\": \">The proposed pipeline shares significant similarities with GenArtist[1] on image editing. The paper does not clearly demonstrate the differences between this work and GenArtist. It is important for the authors to specify these distinctions and highlight the novelty of their approach. Additionally, a thorough comparison should be incorporated into the experimental section to strengthen the evaluation.\\n\\n**Response 1**: We appreciate the reviewer\\u2019s mention of the GenArtist paper. However, we would like to point out that GenArtist is a concurrent work, classified as contemporaneous according to the ICLR policy (refer to the FAQ section in the ICLR Reviewer Guide, https://iclr.cc/Conferences/2025/ReviewerGuide), and authors are not required to compare their work to the paper.\\n\\nThat said, we have clarified the key distinctions between our work and GenArtist. Specifically: (1) Our CtrlSynth pipeline offers support not only for controllable image synthesis but also for diverse text and image synthesis paths. In Section 3.2, we detail four unique synthesis paths that our method facilitates. (2) While GenArtist focuses on enabling more fine-grained and precise control over image generation, primarily benchmarking against methods like MagicBrush and InstructPix2Pix, our pipeline could benefit from such advancements. Nonetheless, it remains an open research question how to automate the generation of image editing instructions for each dataset sample, given that approaches like MagicBrush and InstructPix2Pix require manually crafted per-sample instructions and require additional training of text-to-image models to support such precise control.\\n\\n>While fine-grained control is presented as the main contribution, represented by the text and image controllers in the pipeline, the design is inadequate and lacks clarity. The design of the pipeline does not effectively demonstrate how the editing condition is provided to the generative model in a fine-grained manner. The text controller relies solely on prompt concatenation, making the mapping between visual tags and policies unclear and limiting precise control. Additionally, the paper does not address how to maintain image consistency after editing, which is essential for practical use. These shortcomings contribute to potential inconsistencies and an insufficient explanation of how fine-grained control is maintained. The image controller exists the same problem.\\n\\n**Response 2**: \\nWe have detailed our design approach in Section 3.1, where we outline the visual tags that enable users to exercise fine-grained control through editing. In line 201 of the revised version, we introduce three control policies for the text controller and two control policies for the image controller. It\\u2019s important to clarify that we do not modify the underlying models (LLMs or text-to-image models). Instead, our editing is facilitated solely through input textual instructions.\\nWhile methods like InstructPix2Pix could be explored to maintain image consistency post-editing, applying such techniques at scale remains a significant challenge. Our primary objective is to augment existing datasets by synthesizing diverse samples. Interestingly, we find value in synthetic images that are not perfectly aligned, as they offer beneficial semantic augmentation to the original datasets. In fact, enforcing complete alignment in training can be detrimental, as it restricts the augmentation potential and hampers overall performance.\"}",
"{\"summary\": \"The authors propose a controllable image-text generation pipeline that can augment data to improve CLIPs image retrieval, classification, and compositional performance. Specifically, they leverage strong vision models to tag images with objects and attributes, use the knowledge in language models to create new variations of the captions, and use diffusion models to generate images based on the new captions as prompts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The pipeline can be thought of as a way to distill knowledge from the language models and stable diffusion models to augment the dataset of CLIP. This is an interesting way to inject new information in synthetic data.\", \"The results are good, demonstrating improvements over CLIP while maintaining the amount of data it sees since the fix the number of iterations and just change the proportions of real vs their synthetic data.\"], \"weaknesses\": [\"They say that the language model takes an instruction on how to generate a caption given the visual tags. They show some examples in Appendix A1. The instructions don't mention any editing, it mostly just says to describe the image better. In that case, do the gains come from some hallucination in the LLM caption that makes varied images?\", \"Have the authors tried any other variation of editing instructions? Is there any analysis on the kinds of image editing prompted by the text that improve performance more? Are there specific prompts that serve as better negatives when tuning the CLIP contrastive loss?\", \"There are other works that edit images based on text instructions like instruct pic to pic, magic brush etc. It might have been nice to see to see if editing certain things in images based on the LLM prompts is better than just using SD to generate since SD can often lack accuracy in generating the correct attribute object relation compositions.\", \"Nit: There are several works that either generate synthetic images based on the dataset they want to target (https://arxiv.org/pdf/2406.05184), or for cross domain retrieval (https://arxiv.org/pdf/2401.00420). A discussion for comparison could be nice.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comments from Reviewer\", \"comment\": \"Thanks for the authors'rebuttal. I have carefully read the comments from other reviewers. For my own comments, I'm still concerned about the practical usage of the proposed pipeline. For example, it requires more than 3 weeks for image generation based on 8 GPUs and 1M images.\\n\\nBesides, I'm a bit curious about Q1 from Reviewer JT2Z, where the reviewer mentioned the performance gain by simply focusing on the the diversity and quality of synthetic text. Given the fact that this method is not efficient if generating images, I think the text-only performance would be more interesting as it can be much more efficient for the practical usage. However, it seems that the authors do not validate this potential.\"}",
"{\"comment\": \">In the ablation experiments, it was observed that the performance improvement brought by CtrlSynth-img alone is minimal. Would it be possible to completely remove the generation of synthetic images and focus resources on improving the diversity and quality of synthetic text? Would this lead to consistent performance improvements across all tasks?\\n\\n**Response 1**:\\nThank you for raising this point. The inclusion of image synthesis in our approach is to enable a broader range of diverse text-image synthesis paths and to complete the closed-loop pipeline. While CtrlSynth-img alone provides only minimal improvements in CLIP performance, it has significant potential to improve results in cases where the text is noisy or when the quality of synthetic text degrades.\\nGiven the modular nature of CtrlSynth, we can easily remove or replace either component depending on the use case. For CLIP specifically, we could remove CtrlSynth-img and focus on improving the synthetic text. However, for other tasks, such as long-tail vision datasets, we still need synthetic images. \\n\\n>The paper mentions that CtrlSynth uses a self-filtering mechanism to improve the quality of synthetic data, but it lacks detailed explanations about the implementation, such as how the alignment threshold for visual tags is selected.\\n\\n**Response 2**:\\nIn Section 4.4 and Figure 6(a), we describe the methodology and results related to different thresholds for self-filtering. The threshold is selected based on the zero-shot accuracy of the trained CLIP models, evaluated on the ImageNet validation set. To provide further clarity, we have added additional explanations for the self-filtering process in Appendix A.5.\\n\\n>The paper does not explain in depth how CtrlSynth fundamentally differs from other caption augmentation methods like VeCLIP and LaCLIP. It is necessary to provide a clearer comparison, clarifying whether the increased diversity brought by the decomposition of visual tags and user control strategies is more important, or whether it is the generation of more fine-grained semantic captions that matters.\\n\\n\\n**Response 3**:\\nCompared to VeCLIP and LaCLIP, the key difference is that our CtrlSynth system has more fine-grained visual tags and more diverse synthesis paths. In the ablation study (Section 4.5, Table 8), we provide evidence that the improvements of CtrlSynth come from both the use of more fine-grained semantic captions and the diverse text-image synthesis path. We also present the table below:\\n\\n| Study | Model | Tags | Samples | Common Tasks | ImageNet-1K | SugarCrepe |\\n|-----------------------------------|----------------------|--------------|----------------------|--------------|-------------|------------|\\n| **Models** | | | | | | |\\n| | Qwen2-7B, SDXL | - | - | 24.7 | 23.5 | 65.1 |\\n| | Qwen2-7B, SD3M | - | - | 26.1 | 23.8 | 65.2 |\\n| | Mistral-Nemo, SD3M | - | - | 26.6 | 25.1 | 68.1 |\\n| **Tags** | | | | | | |\\n| | - | Obj | - | 26.4 | 24.7 | 64.3 |\\n| | - | Obj+Attr | - | 26.2 | 24.8 | 65.4 |\\n| **Samples** | | | | | | |\\n| | - | - | CtrlSynth-cap, SP(1) | 26.2 | 24.5 | 67.2 |\\n| | - | - | CtrlSynth-img, SP(4) | 22.1 | 21.8 | 64.4 |\\n| | - | - | CtrlSynth-capimg, SP(3) | 26.5 | 24.8 | 67.5 |\\n| **CtrlSynth** | Mistral-Nemo, SDXL | Obj+Attr+Rel | CtrlSynth-mix | 27.1 | 25.3 | 68.5 |\"}",
"{\"metareview\": \"The submission proposes CtrlSynth, a controllable pipeline to generate synthetic images for representation learning. The broad workings are as follows: 1) Given a real image and optional associated caption, image tags are generated using a vision model; 2) Using an LLM, a new caption can be generated from the original caption and tags, remixed using user instructions; 3) Using the new caption, a text-to-image model is used to generate a synthetic image; 4) This synthetic image can be used to train other vision models or fed back into this pipeline to generate more images.\\nThe authors compare the quality of CLIP models trained on two datasets - 1) A dataset of real images, and 2) A dataset containing a mix of real and synthetic images; and show that CtrlSynth helps improve accuracy on multiple tasks.\\n\\nThe submission received ratings of 3, 6, 5, 6. Some key weaknesses highlighted by the reviewers include:\\n1) Lack of demonstrations on whether the described image editing workflow is helpful\\n2) Missing details in the submission including on self-filtering\\n3) Lack of demonstration showing that synthetic data helps when a larger dataset of real images is available.\\n\\nThe AC would like to note that prior work (which has not been cited) has demonstrated similar results, and reduced the novelty of the current submission:\\n1) StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners (NeurIPS 2023): They investigate the accuracy of visual representations from simCLR and CLIP trained on synthetic images generated by text-to-image models.\\n2) Synthetic Data from Diffusion Models Improves ImageNet Classification (TMLR 2023): \\\"augmenting the ImageNet training set with samples from a generative diffusion model can yield substantial improvements in ImageNet classification accuracy ...\\\"\\n\\nFollowing the discussion with reviewers, reviewer L3Nm (who gave a rating of 6) was not opposed to rejection given the missing experiments in the submission.\\n\\nTaking everything into account, especially prior work showing the efficacy of synthetic images, the only major contribution of this submission is the real image -> caption -> modified caption -> synthetic image pipeline, which does not meet the bar for acceptance. The ACs thus recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"In the discussion with reviewers, some flaws in the submission stood out:\\n1) No results or comparisons with CLIP models trained on larger real image sets. Results were only shown on datasets of ~12M samples, whereas publicly available CLIP models use 400M+ images.\\n2) Insufficient demonstrations of claimed capabilities such as editing and their impact.\\n\\nPoint 1) is especially important. However, the uncited prior work (which I have listed above) has demonstrated very similar experiments, thereby reducing the novelty of this submission.\"}",
"{\"comment\": \"Thank you for the engaging discussion. We understand that image generation demands significantly more GPU resources compared to other synthesis methods. While it might appear optional for certain multimodal use cases, it is necessary for long-tail vision tasks where text synthesis alone falls short. The primary computational bottleneck in image generation lies in the multi-step diffusion process (28 steps in our case). However, techniques like SDXL-Lightning demonstrate that a 4-step diffusion process can produce similar-quality images. Leveraging such methods could potentially reduce total GPU hours from 4608 to just 660. Our immediate goal is to validate image synthesis within the CtrlSynth closed-loop pipeline, and we will include discussions on optimizing synthesis efficiency to support practical deployment scenarios.\\n\\nText-only synthesis is a key component in our CtrlSynth pipeline, and we have validated the effectiveness of our text synthesis methods, specifically, Tables 5 and 6 highlight how our decompose-and-recompose feature for visual tags surpasses previous text-only synthesis methods such as VeCLIP and LaCLIP. Additionally, our ablation study (Table 8) confirms that incorporating visual tags significantly enhances both the diversity and quality of synthetic text.\\n\\nPlease let us know if you still have concerns, we are happy to make further clarifications.\"}",
"{\"comment\": \"> They say that the language model takes an instruction on how to generate a caption given the visual tags. They show some examples in Appendix A1. The instructions don't mention any editing, it mostly just says to describe the image better. In that case, do the gains come from some hallucination in the LLM caption that makes varied images?\\n\\n**Response 1**: We appreciate the reviewer\\u2019s feedback and have provided additional clarification in our revised version. \\n\\nSpecifically, examples of text and image synthesis instruction templates can be found in Appendix A1. To be clear, our approach does not involve asking the models to perform editing tasks, nor do our instructions include editing steps, as these would be highly dependent on specific use cases. Instead, we explain that users have the option to modify visual tags manually (e.g., by adding or removing tags) and subsequently incorporate these edited tags into the instructions. The automation of editing instructions is beyond the current scope and is earmarked for future work.\\nThe performance improvements we report are derived from the model's ability to generate novel combinations of visual tags. This capability, which we refer to as semantic augmentation, is an intentional feature of our method. We have updated the text in Section 3.2 (line 219) to better articulate this point.\\n\\n>Have the authors tried any other variation of editing instructions? Is there any analysis on the kinds of image editing prompted by the text that improve performance more? Are there specific prompts that serve as better negatives when tuning the CLIP contrastive loss?\\n\\n**Response 2**: To clarify, our CtrlSynth pipeline is designed to support various types of editing instructions in a plug-and-play manner. We have empirically observed that different kinds of image editing instructions yield similar performance outcomes. However, our synthetic pipeline is highly extensible and can readily accommodate more sophisticated or optimized instructions, should future research provide a more systematic evaluation or improved methodologies.\\nAdditionally, we did not experiment with enhanced negative samples for CLIP training. Our primary objective was to demonstrate the sample efficiency of our approach, and therefore, we adhered to the original baseline setup used in CLIP training to ensure a fair comparison.\\n\\n>There are other works that edit images based on text instructions like instruct pic to pic, magic brush etc. It might have been nice to see to see if editing certain things in images based on the LLM prompts is better than just using SD to generate since SD can often lack accuracy in generating the correct attribute object relation compositions.\\n\\n**Response 3**: Thank you for highlighting the relevant papers. We have revised Section 2 (line 124) to include a discussion of InstructPix2Pix and MagicBrush in the related work. Prior image editing methods, such as InstructPix2Pix and MagicBrush, contribute valuable techniques and datasets aimed at enabling precise control over image generation. While our image synthesis approach could certainly benefit from these advancements, our primary focus remains on enabling diverse data synthesis. 
We acknowledge that automatically generating image editing instructions for each sample in a dataset is an open research question, which we hope future work will address.\\n\\n>Nit: There are several works that either generate synthetic images based on the dataset they want to target (https://arxiv.org/pdf/2406.05184), or for cross domain retrieval (https://arxiv.org/pdf/2401.00420). A discussion for comparison could be nice.\\n\\n**Response 4**: Thanks for pointing out the related papers. We added the work in the related work section. Our pipeline can also be combined with previous work (https://arxiv.org/pdf/2406.05184) to improve the performance of cross-domain retrieval tasks or when the target task has little real data to retrieve (https://arxiv.org/pdf/2401.00420).\"}",
"{\"comment\": \">Practical concerns: By using several models, such as the vision tagging model, LLM, and diffusion model, the proposed method might not be efficient for scaling up to larger datasets, particularly considering the time cost associated with image synthesis.\\n>The assumption behind CtrlSynth is based on a fixed number of data samples, where the method helps a model achieve better performance than training on the original dataset. However, given the recent trends in LLM and multimodal LLM research, where pretraining data continues to scale up, the proposed method may not be scalable for very large datasets. While this is a challenge, under the current setting in the paper, CtrlSynth is indeed effective.\\n\\n**Response 1**:\\nThank you for acknowledging the effectiveness of our method in the current setting. Our primary goal is to demonstrate the effectiveness of CtrlSynth under a fixed number of data samples, and we have shown that our approach can enhance model performance compared to training solely on the original dataset. We agree that the efficiency of CtrlSynth is constrained by the resources required for image synthesis, particularly as datasets grow larger. Nonetheless, our method provides a valuable step forward within the existing setting, and future work could explore optimizations or alternative strategies to address these scalability concerns.\\n\\n>Can the authors provide details on the overall efficiency of the proposed pipeline? For example, how long does it take to generate 1 million images along with their captions? It would also be good to know the time cost of each component, e.g. vision tagging, caption generation, image generation. A more complete picture of the efficiency in the pipeline would better help to assess the value of this work.\\n\\n**Response 2**: we will include detailed efficiency information in the revised paper. Below is a summary of the GPU hours (using H100) required for processing 1 million images and captions:\\n- Visual tagging: 89 GPU hours (52 for Florence, 16 for CatLIP, 21 for Qwn2 extraction)\\n- Caption generation: 32 GPU hours for running Mistral-Nemo inference\\n- Image generation: 4608 GPU hours for running SDXL\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}"
]
} |
1X1R7P6yzt | Discrete GCBF Proximal Policy Optimization for Multi-agent Safe Optimal Control | [
"Songyuan Zhang",
"Oswin So",
"Mitchell Black",
"Chuchu Fan"
] | Control policies that can achieve high task performance and satisfy safety constraints are desirable for any system, including multi-agent systems (MAS). One promising technique for ensuring the safety of MAS is distributed control barrier functions (CBF). However, it is difficult to design distributed CBF-based policies for MAS that can tackle unknown discrete-time dynamics, partial observability, changing neighborhoods, and input constraints, especially when a distributed high-performance nominal policy that can achieve the task is unavailable. To tackle these challenges, we propose **DGPPO**, a new framework that *simultaneously* learns both a *discrete* graph CBF which handles neighborhood changes and input constraints, and a distributed high-performance safe policy for MAS with unknown discrete-time dynamics.
We empirically validate our claims on a suite of multi-agent tasks spanning three different simulation engines. The results suggest that, compared with existing methods, our DGPPO framework obtains policies that achieve high task performance (matching baselines that ignore the safety constraints), and high safety rates (matching the most conservative baselines), with a *constant* set of hyperparameters across all environments. | [
"control barrier functions",
"multi-agent systems",
"black-box systems",
"partial observability",
"reinforcement learning"
] | Accept (Poster) | https://openreview.net/pdf?id=1X1R7P6yzt | https://openreview.net/forum?id=1X1R7P6yzt | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yumvGW7Ag7",
"y2mdiaVIMd",
"vTEyuCLKxP",
"qwfdVP0LgJ",
"qhtH3laQut",
"pt4Jq1ZIqi",
"mh5ZCMPHHl",
"k4f5jFDD8l",
"ZnSR8EvI0k",
"ZGRIBRHsy0",
"Vqvm3SAoot",
"PsqcFytiUP",
"Nh7HWgmAeg",
"IJbf5oQvbp",
"GR4yX3WcHb",
"Fds6UYBuw1",
"ETpEQScRg8",
"BUnHW0bYKS",
"9bBpyPxHSU",
"9Aikif2nDm",
"4jGauMZCU9"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732343559312,
1732820572507,
1732341606869,
1737524169953,
1730572395648,
1732679492513,
1732740184732,
1732345498402,
1732564789528,
1732341289621,
1732341446952,
1729687388176,
1732345679089,
1734740441225,
1732735010746,
1732343691241,
1732345292686,
1732345613412,
1732339558749,
1730665701093,
1732560551624
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12158/Reviewer_XeoF"
],
[
"ICLR.cc/2025/Conference/Submission12158/Reviewer_LZax"
],
[
"ICLR.cc/2025/Conference/Submission12158/Reviewer_uB5M"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Reviewer_LZax"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Area_Chair_YiAy"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12158/Reviewer_uB5M"
],
[
"ICLR.cc/2025/Conference/Submission12158/Reviewer_XeoF"
]
],
"structured_content_str": [
"{\"title\": \"Author Reply (1/2)\", \"comment\": \"We thank the reviewers for finding our approach innovative and well-motivated, and for acknowledging our theoretical presentations. **There are important misunderstandings that we hope to have clarified in both our revised manuscript and our response below**.\\n\\nWe hope that our responses below and our revised manuscript address the concerns raised by the reviewer.\\n\\n## Summary\\n\\nIn brief, we have:\\n\\n1. Clarified that the \\\"cost\\\" in our paper is the same as the \\\"negative reward\\\" in the CMDP setting\\n2. Provided additional experimental results showing that DGPPO can generalize to 512 agents\\n3. Provided experimental results to clarify that larger $\\\\nu$ of DGPPO only affects the convergence rate and not the converged performance\\n4. Discussed the sampling efficiency of DGPPO\\n\\n## Detailed Reply\\n\\n> **Weakness 1, Question 1 (1/2):** Without metrics like mean global success rate or reward, it is unclear if the agents are merely achieving safety or are successfully accomplishing the global objective.\\n\\n**R:** **We already report the global reward $\\\\textbf{global reward} = - \\\\textbf{global cost}$**. This may have been a misunderstanding.\\n\\nIn our paper, the joint cost $l$ is task-oriented, equivalent to the negative reward, and **unrelated to the safety constraints**.\\nThis follows the convention in the optimal control community (e.g., [1]) and is **NOT** the cost in the CMDP setting. \\n\\nFor example, in Appendix B.2.1 of our original submission (Appendix C.2.1 of the revision), we have provided detailed equations for the cost, which are mainly distances of the agents to their goals. Therefore, minimizing costs drives the agents to their goals.\\n\\nTo prevent confusion, **we clarify that the cost $l$ is task-oriented and not safety-oriented** in Section 3.1 of the revised paper:\\n\\n- \\\"Let the joint cost function $\\\\text{describing the desired task}$ be denoted as $l: \\\\mathcal{X} \\\\times\\\\mathcal U\\\\to\\\\mathbb R$\\\"\", \"we_have_also_added_a_footnote_in_the_revised_paper_to_explain_the_relationship_with_the_cmdp_setting\": \"- \\\"The cost function l here is **not** the _cost_ in CMDP. Rather, it corresponds to the _negative reward_ in CMDP.\\\"\\n\\nConsequently, **the results (e.g., Figure 3, 4, 5, 6 in Section 5) _do_ show metrics for accomplishing the global objective**.\\nFor example, in Figure 3, since achieving the objective (by minimizing the cost) and achieving safety can be conflicting objectives, some baseline methods have high task performance (i.e., low cost) but low safety rates, while some baseline methods have low task performance (i.e., high cost) but high safety rates.\\n\\n---\\n\\n> **Weakness 1, Question 1 (2/2):** Does the proposed framework present mechanisms to ensure that safety constraints do not dominate the objective?\\n\\n**R:** **Yes**, but only if the safety constraints remain satisfied.\\n\\nThis can be seen in Equation (20) or Figure 1 in the original submission:\\n- At states $\\\\mathbf{x}$ where the safety constraints are satisfied ($\\\\tilde C^{(m)}(\\\\mathbf{x}) \\\\leq 0$), we use the gradient of the objective function\\n- Otherwise, we use the gradient of the constraint\\n\\nHence, if the safety constraints are all satisfied, then only the gradient of the objective function will be used.\"}",
"{\"title\": \"Thank you very much for increasing your score!\", \"comment\": \"Thank you very much for increasing your score! Your valuable questions have greatly improved the presentation and clarity of our paper!\"}",
"{\"title\": \"Author Reply (3/3)\", \"comment\": \"> **Weakness 2:** The stability of DGPPO compared to the baselines does not seem appropriately explained. Is it a purely empirical observation or is there some theoretical justification available?\\n\\n**R:** The improvements to training stability are mostly from empirical observation. \\n\\nHowever, this does agree with prior literature which has pointed out the instability of Lagrangian-based methods in the _zero_ constraint threshold setting [A, B] and the improved training stability of purely primal methods such as CRPO (which we use) [C, D, E]. \\n\\n---\\n\\n> **Weakness 3:** Why is the assumption of unknown dynamics interesting? The environments used seem to be the same as in [1]. It would be a better idea to consider environments where the dynamics are more complicated than [1] (e.g., common MuJoCo robots).\\n\\n**R:** We respectfully disagree with the reviewer for the following reasons:\\n\\n1. The environments used are **not** the same as in [1]. [1] considers only double integrator dynamics and unicycle dynamics, while we also consider the bicycle dynamics (`Bicycle` environment) and contact dynamics (`Transport`, `Wheel`, `Transport2` environments). \\n2. We **already** consider environments with **more complicated dynamcs**. For example, the `Transport`, `Wheel`, `Transport2` environments use the MuJoCo and VMAS simulators for **contact dynamics** and are not used in [1]. Contact dynamics are a big reason why it is important to handle unknown discrete-time dynamics due to their **discrete** nature, which prevents the application of continuous-time methods such as [1]. \\n\\nHowever, we **agree** that considering more complicated dynamics can strengthen our empirical evaluation.\\n**We have added new experiments on the HalfCheetah dynamics from the Safe Multi-agent MuJoCo benchmark [4] to the revised manuscript** in Appendix C.7.\\n\\n## References\\n\\n[1] GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control, Zhang et al, T-RO, 2024.\\n\\n[2] Grover, Jaskaran Singh, Changliu Liu, and Katia Sycara. \\\"Deadlock analysis and resolution for multi-robot systems.\\\" Algorithmic Foundations of Robotics XIV: Proceedings of the Fourteenth Workshop on the Algorithmic Foundations of Robotics 14. Springer International Publishing, 2021.\\n\\n[3] Matteo Bettini, Amanda Prorok, and Vincent Moens. Benchmarl: Benchmarking multi-agent reinforcement learning. Journal of Machine Learning Research, 25(217):1\\u201310, 2024.\\n\\n[4] Shangding Gu, Jakub Grudzien Kuba, Yuanpei Chen, Yali Du, Long Yang, Alois Knoll, and Yaodong Yang. Safe multi-agent reinforcement learning for multi-robot control. Artificial Intelligence, 319:103905, 2023.\\n\\n[A] Mario Zanon and S\\u00e9bastien Gros. Safe reinforcement learning using robust mpc. IEEE Transactions on Automatic Control, 66(8):3638\\u20133652, 2020.\\n\\n[B] Tairan He, Weiye Zhao, and Changliu Liu. Autocost: Evolving intrinsic cost for zero-violation reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 14847\\u201314855, 2023.\\n\\n[C] Tengyu Xu, Yingbin Liang, and Guanghui Lan. Crpo: A new approach for safe reinforcement\\nlearning with convergence guarantee. In International Conference on Machine Learning, pp.\\n11480\\u201311491. PMLR, 2021.\\n\\n[D] Guan, Jiayi, et al. 
\\\"POCE: Primal Policy Optimization with Conservative Estimation for Multi-constraint Offline Reinforcement Learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[E] Gu, Shangding, et al. \\\"Balance reward and safety optimization for safe reinforcement learning: A perspective of gradient manipulation.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"The paper introduces a novel framework, Discrete Graph Control Barrier Functions Proximal Policy Optimization (DGPPO), for ensuring safe control in multi-agent systems (MAS) operating under unknown discrete-time dynamics and input constraints. Unlike prior approaches that rely on continuous-time models and known nominal policies, DGPPO incorporates discrete CBFs (DCBFs) and reinforcement learning to dynamically learn both a high-performance safe policy and a discrete graph CBF (DGCBF). Through extensive empirical validation across various simulated MAS environments, DGPPO claims to achieve both high task performance and safety without the need for hyperparameter adjustments specific to each environment.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors present an innovative combination of reinforcement learning and discrete-time CBFs to address the challenges of safety and task performance in MAS with unknown dynamics. The extension to the DCBF framework in discrete-time and the introduction of DGCBFs allow for neighborhood adaptability, overcoming limitations associated with continuous-time CBFs. The approach is well-motivated, tackling safety in unknown environments without requiring a predefined nominal policy\\u2014a substantial improvement for multi-agent reinforcement learning (MARL). I particularly appreciate the rigorous theoretical presentation to support the proposed approach.\", \"weaknesses\": \"While DGPPO introduces a novel safety mechanism for MAS, nonetheless I believe there are few critical concerns that could limit the effectiveness and general applicability of the approach.\\n\\n**Lack of clear performance metrics**: \\n\\nwhile DGPPO is shown to achieve high safety, the empirical results focus primarily on safety metrics and the minimization of constraints. It remains unclear if the agents are successfully accomplishing global objectives. Without metrics like mean global success rate or reward, it is difficult to assess if the agents are merely achieving safety (e.g., by staying stationary) rather than making meaningful progress toward the task goals while satisfying the safety constraints. This is especially relevant as the DGPPO framework does not incorporate a nominal policy, meaning that without these metrics, the experiments risks overlooking cases where the agents avoid unsafe states at the expense of task completion. Does the proposed framework present mechanisms to ensure that safety constraints do not excessively dominate the objective?\\n\\n**Limited scalability experiments**: \\n\\nthe authors state that DGPPO scale well w.r.t to the baseline approach tested, however testing 5 agents instead of 3 as in the original experiment, I believe it is too limited to claim scalability of the proposed approach. Crucially, as stated from the authors themselves the proposed approach requires both stochastic and deterministic rollouts to enforce DGCBFs. While this approach ensures safety in discrete-time settings, it also introduces significant sample inefficiency, which may limit the framework\\u2019s scalability to larger or more complex MAS. Hence, an extensive test with for instance 10 or 15 agents would strength the results of the paper.\\nMoreover while the authors employ GNNs with attention mechanisms to handle changing agent neighborhoods, the computational complexity of GNNs in larger MAS could become important. 
In high-density environments with frequent neighborhood changes, maintaining an updated and accurate DGCBF through GNNs could pose significant computational challenges, possibly impacting real-time applicability. A detailed discussion on the scalability of GNN-based policies for larger agent systems would add valuable context to the method\\u2019s limitations.\\n\\n**Dependence on hyperparameter $\\nu$ for constraint enforcement**: \\n\\nthe authors\\u2019 claim that DGPPO is less sensitive to hyperparameters does not seem to be properly backed up. From the plot in Fig. 6b, the value of $\\nu$\\u2014responsible for controlling the weight on constraint minimization steps\\u2014significantly impacts performance. Misalignment in $\\nu$ could lead to either overly conservative or unsafe policies, showing that DGPPO still requires careful tuning, contrary to its stated hyperparameter robustness.\", \"questions\": \"Q1: How does DGPPO ensure that agents achieve the global objective, rather than just meeting safety constraints?\", \"q2\": \"Could DGPPO\\u2019s rollout requirements be reduced to improve sample efficiency without compromising safety?\", \"q3\": \"What are the practical scalability limits of DGPPO when applied to larger MAS, particularly with the use of GNNs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your detailed answers. I have decided to raise my score.\"}",
"{\"title\": \"Thanks for the clarifications and detailed response\", \"comment\": \"I appreciate the detailed response provided by the authors and the additional clarifications (such as generalizability and scalability which I had overlooked). I have increased my score appropriately.\"}",
"{\"title\": \"Author Reply (2/4)\", \"comment\": \"> **Q4 (1/4):** Theorem 3 is difficult to understand. What is the purpose of giving Theorem 3?\\n\\n**R:** Thanks for the feedback!\\nTheorem 3 provides a way of computing an approximate *projected* objective function gradient ($g$) so that it does not interfere with the gradients from the constraint ($\\\\sigma^{(m)}$).\\n\\nThis allows us to borrow ideas from multi-objective optimization to improve the sample efficiency as compared to the CRPO-style of constrained optimization.\\n\\nTo improve the clarity of Theorem 3, we have moved the formal statement of Theorem 3 into the Appendix (Theorem A2 in Appendix). Instead, we put an informal statement of Theorem 3 (Informal Theorem 3 in Section 4.3) in the revised version that more clearly communicates the above idea.\\n\\nAlso, we have provided additional discussion on Theorem 3 in the revised manuscript (paragraph above (14)).\\n\\n---\\n\\n> **Q4 (2/4):** The orthogonality assumption in Theorem 3 is impractical.\\n\\n**R:** While we do agree that the orthogonality assumption is impractical, it does allow us to derive a policy loss function that outperforms other alternatives (i.e., ablation study in Appendix B.6.1 of the original submission or Appendix C.6.1 of the revised version).\\n\\nDue to the unrealistic assumptions, we view Theorem 3 not as a core theoretical contribution but rather as a principled way to motivate design choices that behave well empirically, as showcased by our extensive empirical results.\\n\\n---\\n\\n> **Q4 (3/4):** In Theorem 3, the expression for $\\\\sigma$ uses a state distribution $\\\\rho$ independent of $\\\\theta$, but the expression for $g$ uses the stationary state distribution (which is a function of $\\\\theta$).\\n\\n**R:** Good observation! The situation here is the same as in our response to **Q3**.\\nSpecifically, there is a difference between $\\\\nabla_\\\\theta \\\\mathbb{E}\\\\_{\\\\mathbf{x} \\\\sim \\\\rho^{\\\\pi_\\\\theta}}$\\nwhich requires the **policy-gradient theorem** to compute the gradient, and$\\\\nabla_\\\\theta \\\\mathbb{E}\\\\_{\\\\mathbf{x} \\\\sim \\\\rho} \\\\Bigg|\\\\_{\\\\rho = \\\\rho^{\\\\pi_\\\\theta}}$, which uses the **score function gradient**. \\n\\nWe have included a detailed discussion on this in Appendix B.2 in the revised version.\\n\\n---\\n\\n> **Q4 (4/4):** What is the relationship between (13) and (14)? \\n\\n**R:** We realize that the derivation of the policy loss $L$ from Theorem 3 could be made clearer, and **have added a detailed derivation of the policy loss** to Appendix B in the revised version.\\n\\nAlso, to make the connection more straightforward, we change the old (14) to use an indicator function instead of having two cases for the safe and unsafe cases, i.e.,\\n$$\\n\\\\begin{align}\\n L(\\\\theta)\\n &= \\\\mathbb{E}\\\\_{\\\\mathbf{x} \\\\sim \\\\psi(\\\\rho^{\\\\pi_\\\\theta})}\\n \\\\mathbb{E}\\\\_{\\\\mathbf{u} \\\\sim \\\\psi(\\\\pi\\\\_\\\\theta(\\\\cdot|\\\\mathbf{x}))}\\n \\\\big[ \\\\log \\\\pi\\\\_\\\\theta(\\\\mathbf{x}, \\\\mathbf{u}) \\\\psi\\\\big( \\\\tilde{Q}(\\\\mathbf{x}, \\\\mathbf{u}, \\\\theta) \\\\big) \\\\big], \\\\\\\\\\\\\\\\\\n %%%%%%%\\n \\\\tilde{Q}(\\\\mathbf{x}, \\\\mathbf{u}, \\\\theta) &:=\\n \\\\mathbb{1}\\\\_{ \\\\\\\\{\\\\max_m \\\\tilde{C}^{(m)}\\\\_\\\\theta(\\\\mathbf{x}) \\\\leq 0 \\\\\\\\} } \\\\psi(Q^{\\\\pi_\\\\theta}(\\\\mathbf{x}, \\\\mathbf{u})) + \\\\nu \\\\max\\\\_m \\\\tilde{C}^{(m)}(\\\\mathbf{x}, \\\\mathbf{u}).\\n\\\\end{align}\\n$$\"}",
"{\"title\": \"Thank you very much for raising your score!\", \"comment\": \"Thank you very much for raising your score!\\n\\nSince your current score is borderline accept, **would you please let us know what concerns are holding you back from further raising the score?**\\nWe are more than happy to address them.\"}",
"{\"title\": \"Author Reply (1/3)\", \"comment\": \"We thank the reviewer for acknowledging our DGPPO as \\\"an elegant way to solve the discrete-time distributed MASOCP (multi-agent safe optimal control problem) with unknown environment dynamics\\\", and for acknowledging our theoretical analysis.\\nWe hope that our responses below and our revised manuscript address the concerns raised by the reviewer.\\n\\n## Summary\\n\\nIn brief, we have:\\n\\n1. Explained that designing a nominal policy requires simple or known dynamics and can lead to deadlocks;\\n2. Added more experiments to show that the DGPPO policy also generalizes to more agents (512) after training;\\n3. Clarified some details regarding DGPPO including the sampling efficiency, hyperparameter sensitivity, and the definition of the avoid set;\\n4. Clarified the details of the environments in our experiments and the results using more figures;\\n5. Explained the difference in dynamics considered in our paper compared with [1].\\n\\n## Detailed Reply\\n\\n> **Question 1:** Why is the dependency of the algorithm on a nominal policy a bad idea in the given setting? Since it appears easy enough to construct one (say a PID controller like in [1]) for the environments given, is this the right direction?\\n\\n**R:** Good question! There are 2 reasons why depending on a nominal policy is a drawback.\\n\\n#### 1. Requirement of Simple or Known Dynamics\\nController design usually requires the dynamics to be simple or requires knowledge of the dynamics.\\nThe PID controllers in [1] are constructed for the unicycle dynamics. More generally, PID controllers are usually only used with single-input single-output systems. For more complicated systems, one could use LQR or MPC, but this requires **full knowledge** of the dynamics. In addition, PID controllers are much more difficult to apply in environments with complex **contact dynamics**, for example, our `Transport`, `Wheel`, and `Transport2` environments. \\n\\n#### 2. Deadlocks\\nAnother drawback is that the CBF-QP approach of [1] leads to deadlocks, as discussed in Section VIII of [1] or theoretically in e.g. [2].\\nThis is because the safety filter approach of [1] only minimizes deviations from the nominal policy at the current time step, even if this leads to a deadlock at a future time step. \\n\\nIn contrast, minimizing the cumulative cost directly takes future behavior into account and hence will try to avoid deadlocks.\\nTo investigate this, **we have performed an additional experiment in Appendix D.1 in the revised manuscript**, where the approach from [1] results in a deadlock while our method successfully completes the task (See Figure 15 in the revised manuscript).\\n \\nWe have added the above discussion on the limitations of assuming a nominal policy in Appendix D.1 of the revised manuscript.\\n\\n---\\n\\n> **Question 2, Weakness 1:** Scalability appears limited (only up to 7 agents) compared to the continuous time setting of GCBF+ [1]. Can the algorithm generalize to larger numbers of agents? Does the algorithm need to be retrained for every new number of agents?\\n\\n**R:** Good observation! 
We wish to make a distinction between the **scalability** (number of agents during training) and **generalizability** (ability to be deployed with more agents during test time).\\n\\nScalability-wise, GCBF+ also _trains_ on 8 agents (Section VI of [1]), which is similar to the number of agents considered in our training.\\nGeneralizability-wise, GCBF+ can be deployed on 512 agents without significant performance loss after training.\\n\\nWe have performed new experiments and find that **DGPPO is also able to generalize and be deployed on 512 agents** after being trained on 8 agents. The following table shows the safety rates and the normalized cost (w.r.t. traveling distance and number of agents). We can observe that DGPPO maintains high safety rates and low costs when deployed on up to 512 agents.\\n\\n|Number of agents|Safety rate|Normalized cost|\\n|---|---|---|\\n|8|$1.000\\\\pm0.000$|$1.673\\\\pm0.430$|\\n|16|$0.992\\\\pm0.088$|$1.784\\\\pm0.316$|\\n|32|$0.987\\\\pm0.112$|$1.748\\\\pm0.235$|\\n|64|$0.986\\\\pm0.118$|$1.799\\\\pm0.418$|\\n|128|$0.982\\\\pm0.133$|$1.839\\\\pm0.323$|\\n|256|$0.985\\\\pm0.122$|$1.823\\\\pm0.366$|\\n|512|$0.985\\\\pm0.123$|$1.821\\\\pm0.390$|\\n\\nFrom the results above, we can conclude that DGPPO does **not** need to be retrained when deploying for larger numbers of agents. \\n\\nWe have added the above results in Appendix C.8 of the revised manuscript.\"}",
"{\"title\": \"Author Reply (2/3)\", \"comment\": \"> **Question 3 (1/4):** With regards to the sample efficiency and computation requirements, how is DGPPO w.r.t. the baselines (I noticed the training time was listed as 12 hours on the reference specifications)?\\n\\n**R:** Although DGPPO introduces a deterministic rollout which doubles the number of environment samples for each update step, we have validated that the performance improvements of DGPPO are not due to the additional sample use in Appendix B.6.2 of the original version (C.6.2 of the revised version) by **giving the baseline methods double the sample budget**.\\n\\nIn terms of computation time, we find that DGPPO **runs faster** (12 hours) compared to Lagrangian methods (14 hours). This is because Lagrangian methods require an _additional_ backpropagation step to update the Lagrange multiplier which seems to dominate the additional time needed to perform the deterministic rollout.\\nHowever, the deterministic rollout still results in additional overhead compared to the baseline Penalty methods (10 hours). \\n\\nWe have added the training times of the baselines to Appendix C.1 of the revised manuscript.\\n\\n---\\n\\n> **Question 3 (2/4):** On a related note, how is the benefit of a constant set of hyperparameters demonstrated? Can we confidently say the hyperparameter search for the baselines takes significantly longer (in wall clock time on a comparable machine)?\\n \\n**R:** The benefit of a constant set of hyperparameters is discussed in Paragraph \\\"(Q1): DGPPO has the best performance and is hyperparameter insensitive.\\\" in Section 5.2 of the original submission.\\n\\nWe observe that for fixed hyperparameters, the performance of baseline methods varies greatly when the environment changes. Consequently, changes in the environment require the hyperparameters to be fine-tuned again.\\n\\nAssuming a budget of 10 runs for hyperparameter optimization for the baseline methods and a training time of 10-14 hours per run, this implies a total time of 100-140 hours. This is **significantly longer** than our method which runs in only 12 hours.\\n \\n---\\n \\n> **Question 3 (3/4):** What are the restrictions on the definition of the avoid set and the assumptions on the function?\\n\\n**R:** The only requirement is that the safety specification function $h_i$ can be written as a function of agent $i$'s **local observation** (Line 127 in original submission).\\n\\n---\\n\\n> **Question 3 (4/4):** Do the avoid sets primarily represent distance greater than some safe radius?\\n\\nMany of our environments happen to have avoid sets defined using distance in the position space simply due to distance-based safety being common.\\nUsing distance-based avoid sets is **not** a restriction.\\nFor example, the avoid set of the `Wheel` environment is defined using angular distance, i.e., the angle of the wheel cannot be within some region. \\n\\n---\\n\\n> **Question 4:** Is LiDAR only used in the LiDAR environments? How are the obstacles in the VMAS environments represented to the agents?\\n\\n**R:** **Yes**, only the LiDAR environments use LiDAR. The obstacles in the VMAS environments are represented using states including their positions and sizes following the original code [2]. We have clarified this in Appendix C.2.3 of the revised manuscript.\\n\\n---\\n\\n> **Question 5:** The experiments with scalability to multiple agents (Fig. 5) appear quite close to the baselines. 
Is there a better comparison available?\\n\\n**R:** Thanks for the suggestion! To provide a clearer comparison, we **add a new plot** (Figure 14 in the appendix of the revised manuscript) that compares the cost and safety rate of the converged policies and observe that **DGPPO achieves a safety rate of near $100\\\\%$, similar to the most conservative baselines, while achieving half of their cost**.\"}",
"{\"summary\": \"This paper proposes a safe multi-agent reinforcement learning method based on distributed control barrier functions (CBFs) for multi-agent systems with limited perception capabilities. Simulation results on several multi-agent safe coordination tasks demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The multi-agent safe optimal control problem considered in this paper is both general and challenging, as neither the system model nor a nominal policy is available in advance.\\n\\n(2) The learned policy is safe, which does not require additional safety filters in implementation.\\n\\n(3) Extensive simulations are conducted, and state-of-the-art baselines are compared.\", \"weaknesses\": \"There exist several theoretical issues in the paper. The motivation of employing the discrete graph CBF is unclear. Some implementation details should be incorporated. See Questions part for more details.\", \"questions\": \"(1) The policy in this work only takes local observations as input, such that it is decentralized. Why do you refer to it as a distributed policy?\\n\\n(2) The modified constraint (11b) is too strict which cannot be satisfied by a lot of policy classes, such as the Gaussian policy.\\n\\n(3) In (12), please provide the explicit gradient formula when the safety condition is violated. Note that the authors provide a gradient in (41). Nevertheless, this gradient is not associated to the policy loss function (under safety violation) in (12).\\n\\n(4) Theorem 3 is very difficult to understand. The orthogonality assumption is impractical. The reviewers also find that the authors try to replace the stationary state distribution (which is a function associated to the policy parameter $\\\\theta$) with a constant state distribution in this theorem to obtain their gradient $g$ in (13). What is the purpose of giving Theorem 3? What is the relationship between (13) and (14)? \\n\\n(5) The reason of using discrete graph CBFs should be explained clearly. Note that we can regard the multi-agent system as a large but single agent. Then, you can directly use the discrete CBF given in Theorem 2 to learn safe policies. In this case, the distributed control nature can still be preserved as the learned observation-based policy is end-to-end.\\n\\n(6) Theorem 4 is hard to understand. What is the relationship between the discrete graph CBF and the discrete CBF? Similar to Theorem 1, it is important for the authors to show that the safe set is forward invariant based on the discrete graph CBF.\\n\\n(7) In (11b), the safety constraint is calculated using a stochastic policy $\\\\pi$. However, in Fig. 1, deterministic policies are used for estimating the discrete graph CBF.\\n\\n(8) Why do the agents have different GAEs in your algorithm? Are you suggesting that the agents are heterogeneous and that their local policies differ?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Author Reply (4/4)\", \"comment\": \"> **Q7:** In (11b), the safety constraint is calculated using a stochastic policy $\\\\pi$. However, in Fig. 1, deterministic policies are used for estimating the discrete graph CBF.\\n\\n**R:** This observation is correct. DGPPO is **designed in this way** and it is not a mistake.\\n\\nWhile the CBF $V^{h^{(m)}}$ is **learned** using a deterministic policy, it is **evaluated** in the CBF constraint (11b) using a stochastic policy. Given a learned $V^{h^{(m)}}$ from a deterministic policy, the safety constraint in (11b) is calculated where the expression of $C^{(m)}$ (6b) depends on $B^{(m)} = V^{h^{(m)}}$.\\n\\nWe do this because\\n- $V^h$ is a DGCBF (Corollary 1), but the definition of the $V^h$ function requires a deterministic policy (Equation (7)).\\n- Computing gradients with unknown dynamics using either the policy-gradient theorem or score function gradients requires a stochastic policy. \\n\\nWe test the necessity of using a deterministic policy in the paragraph \\\"Learning $V^h$ with a stochastic policy\\\" in Section 5.3 of the original submission by using a stochastic policy to learn $V^h$ instead.\\nHowever, this degrades both the cost and safety rate.\\n\\n---\\n\\n> **Q8:** Why do the agents have different GAEs in your algorithm? Are you suggesting that the agents are heterogeneous and that their local policies differ?\\n\\n**R:** Thank you for the careful reading! This is a typo, and we have corrected this in the revised manuscript (Equation 21 in the revised manuscript), where the GAE (calculated from the cost) of all the agents are denoted as $A^\\\\mathrm{GAE}$, and the advantage used for updating agent $i$'s policy is defined as $\\\\tilde{A}_i$ (eighter equals to $A^\\\\mathrm{GAE}$ or $\\\\nu\\\\max_m\\\\hat{C}\\\\_{\\\\theta,i}^{(m)}$ depending on where the DGCBF constraint is satisfied). \\n\\n## References\\n\\n[1] Garg, Kunal, et al. \\\"Learning safe control for multi-robot systems: Methods, verification, and open challenges.\\\" Annual Reviews in Control 57 (2024): 100948.\\n\\n[2] Zhang, Kaiqing, et al. \\\"Fully decentralized multi-agent reinforcement learning with networked agents.\\\" International conference on machine learning. PMLR, 2018.\"}",
"{\"metareview\": \"Discrete GCBF Proximal Policy Optimization for Multi-agent Safe Optimal Control\", \"summary\": \"This paper proposes DGPPO, which integrates discrete graph control barrier functions (DGCBFs) with reinforcement learning to address the safety and performance challenges in multi-agent systems under unknown discrete-time dynamics and limited sensing conditions. The paper introduces discrete-time extensions of control barrier functions and leverages a policy optimization technique to jointly learn safe and performant policies without requiring a predefined nominal policy. DGPPO is evaluated across multiople simulation environments, demonstrating superior safety and cost performance compared to baselines while being robust to hyperparameter changes.\", \"comments\": \"We received 3 expert reviews, with the scores 6, 6, 8, and the average score is 6.67. The reviewers are generally positive about the algorithmic and technical contributions of this paper. The discrete graph formulation and their integration with reinforcement learning overcomes many limitations of continuous-time CBFs. The technical analysis is rigorous, with results that ensure forward invariance of safety constraints and address the challenges of unknown dynamics in MAS. Simulations demonstrate near-perfect safety rates as a validation of the proposed approach. The paper is also well-written.\\n\\nThe reviewers have also provided multiple comments to strengthen the paper. One main comment was to include additional experiments that demonstrate the scalability. Another comment is to further validate the claim that the proposed method is robust to hyperparameters changes. I am glad to see that the authors have already addressed these comments during the rebuttal and the reviewers have acknowledged their efforts. I recommend the authors update the paper by accommodating these suggestions.\", \"additional_comments_on_reviewer_discussion\": \"Please see the \\\"Comments\\\" in the meta-review.\"}",
"{\"title\": \"Thank you very much for raising your score!\", \"comment\": \"Thank you very much for raising your score! Your theoretical questions have greatly improved the theoretical rigor of our paper!\\n\\nIf you have any further concerns that are holding you back from further raising your score, please let us know! We are more than happy to address them.\"}",
"{\"title\": \"Author Reply (2/2)\", \"comment\": \"> **Weakness 2, Question 3 (1/3):** Does the need for both deterministic and stochastic rollouts limit the scalability of DGPPO to larger or more complex MAS?\\n\\n**R:** **No, this does not limit the scalability**.\\n\\nWhile the need for both deterministic and stochastic rollouts means that DGPPO requires twice the number of samples compared to baseline methods, we actually find **DGPPO takes similar or even less time than the baseline methods**.\\nIn particular, DGPPO takes 12 hours, while Lagrangian methods take 14 hours.\\nThis is because Lagrangian methods require an _additional_ backpropagation step to update the Lagrange multiplier, which dominates the additional time needed to perform the deterministic rollout.\\nHowever, the deterministic rollout still results in additional overhead compared to the baseline Penalty methods (10 hours). \\n\\nWe have added the training times of the baselines to Appendix C.1 of the revised manuscript.\\n\\n---\\n\\n> **Weakness 2, Question 3 (2/3):** An extensive test with for instance 10 or 15 agents would strengthen the results of the paper. \\n\\nThank you for the suggestion! We actually go even further and **perform additional experiments where we keep increasing the number of agents until 512** and find that DGPPO can maintain high safety rates and low costs even on much larger MAS. We show the results in the table below.\\n\\n|Number of agents|Safety rate|Normalized cost|\\n|---|---|---|\\n|8|$1.000\\\\pm0.000$|$1.673\\\\pm0.430$|\\n|16|$0.992\\\\pm0.088$|$1.784\\\\pm0.316$|\\n|32|$0.987\\\\pm0.112$|$1.748\\\\pm0.235$|\\n|64|$0.986\\\\pm0.118$|$1.799\\\\pm0.418$|\\n|128|$0.982\\\\pm0.133$|$1.839\\\\pm0.323$|\\n|256|$0.985\\\\pm0.122$|$1.823\\\\pm0.366$|\\n|512|$0.985\\\\pm0.123$|$1.821\\\\pm0.390$|\\n\\nThese results have been added to Appendix C.8 of the revised manuscript.\\n\\n---\\n\\n> **Weakness 2, Question 3 (3/3):** Does the use of GNNs pose computational challenges with larger MAS?\\n\\n**R:** **No**. We have shown above that DGPPO can be applied to 512 agents. In addition, GNNs not only do **not** pose computational challenges with larger MAS, but also **enable** our algorithm to be applied to larger MAS because of its ability to handle changing neighborhoods, large numbers of neighbors, etc.\\n\\n\\n---\\n\\n> **Weakness 3:** From Figure 6b, the performance of DGPPO seems to be sensitive to the weight $\\\\nu$ of the constraint minimization step. \\n\\n**R:** This is a good observation.\\nHowever, we wish to make the following two points.\\n\\n#### 1. The performance of Converged Policy is Unaffected\\nLarger $\\\\nu$ **only** affects the convergence rate of the cost and **not** the performance after convergence as long as $\\\\nu>1$. In contrast, varying the hyperparameters of the baseline methods **directly affects the performance of the converged policy**.\\n\\nWe have added **additional experiments to verify this experimentally** by running $\\\\nu=4$ and $\\\\nu=6$ for longer and observe that the performance of the _converged_ policy matches the performance using the $\\\\nu$ schedule.\\nSee Appendix C.6.4 and Figure 12 in the revised manuscript.\\n\\n#### 2. Insensitive to the _type_ of environment\\nBaseline methods are sensitive to **the type of environment**. 
Since the optimal hyperparameter changes for different environments, the baseline hyperparameters **do not generalize** across different environments.\\n\\nOn the contrary, the fact that we use a single set of hyperparameters shows that our hyperparameters generalize across different environments, and DGPPO is not sensitive to the environment type.\\n\\n---\\n\\n> **Question 2:** Could DGPPO\\u2019s rollout requirements be reduced to improve sample efficiency without compromising safety? \\n\\n**R:** As shown in paragraph \\\"Learning $V^h$ with a stochastic policy\\\" of Section 5.3 in the original submission,\\nwhile it is possible to use _stochastic_ instead of _deterministic_ rollouts, this degrades the cost and safety rate.\\nWe thus choose to use both deterministic rollouts (to learn $V^h$, which is a DGCBF) and stochastic rollouts (to optimize the policy under unknown dynamics) to maximize the cost and safety rate of DGPPO.\", \"note\": \"we have shown in Appendix B.6.2 of the original submission (Appendix C.6.2 of the revised version) that, **even if the baselines are given double the number of samples**, they still do not achieve similar performance as DGPPO.\\n\\nMoreover, from a _computational_ perspective, doubling the sample requirement does not change the wall clock time compared to the baseline methods. This is because the computational bottleneck lies in the _backpropagation step_ and not the _sample collection_ step in our experiments.\\n\\n## References\\n\\n[1] Chow, Yinlam, et al. \\\"A lyapunov-based approach to safe reinforcement learning.\\\" NeurIPS 2018.\"}",
"{\"title\": \"Author Reply (1/4)\", \"comment\": \"We thank the reviewer for finding our considered problem general and challenging and for acknowledging our extensive experiments. We also thank the reviewer for the detailed theoretical concerns from closely reading our manuscript.\\n\\nWe hope that our responses below and our revised manuscript address the concerns raised by the reviewer.\\n\\n## Summary\\n\\nIn brief, we have:\\n\\n1. Explained why the policy in this work is \\\"distributed\\\"\\n2. Clarified the difference between **policy gradients** and **score function gradients** during the policy updates w.r.t. the DCBF condition and performed additional experiments that demonstrate the difference\\n3. Provided proofs for the **safety** and **generalizability** guarantee of DGCBF.\\n4. Clarified the necessity of using deterministic rollouts to learn the constraint-value function $V^h$\\n5. Discussed the advantage of using DGCBF instead of DCBF\\n\\n## Detailed Reply\\n\\n> **Q1:** The policy in this work only takes local observations as input, such that it is decentralized. Why do you refer to it as a distributed policy?\\n\\n**R**: Good question.\\n\\nWe use the definition in this work that a distributed method allows for communication among agents (e.g., [1]), though we note that this term has a different meaning in other communities ([2]).\\n\\nUsing this definition, whether the policy counts as distributed or not depends on whether the observation function $O_i$ uses communication or not.\\nSince DGPPO is agnostic to both cases, we refer to it as a distributed policy.\\n\\n---\\n\\n> **Q2:** The modified constraint (11b) is too strict which cannot be satisfied by a lot of policy classes, such as the Gaussian policy.\\n\\n**R**: Even if the modified constraint (11b) is too strict to be satisfied theoretically, **empirically** this results in a large increase in safety compared to the original more relaxed constraint (10) without significant differences in the cost (Figure 8 in App. B.6.1, Figure 9 in App. C.6.1 of the revised version).\\n\\nMore importantly, this formulation enables the use of cheap gradient projections as in Theorem 3 to improve the sample efficiency, which outperforms both approaches above.\\n\\nWhile we are currently unable to explain this gap, we believe the empirical results are sufficient and leave stronger theoretical explanations of this phenomenon as future work. \\n\\n---\\n\\n>**Q3:** In (12), when the safety condition is violated, the expression for the gradient, which requires the policy-gradient theorem, is not associated with the score function gradient in (41).\\n\\n**R:** Good catch, this is a typo. 
Thank you for the careful reading!\\n\\nElaborating a bit more, there is a distinction between\\n\\n\\n$$\\n\\\\nabla_\\\\theta \\\\mathbb{E}\\\\_{\\\\mathbf{x} \\\\sim \\\\rho^{\\\\pi_\\\\theta}}[ \\\\tilde{C}\\\\_\\\\theta^{(m)}(\\\\mathbf{x}) ] = \\n\\\\nabla\\\\_\\\\theta \\\\mathbb{E}_\\\\{\\\\mathbf{x}\\\\_0 \\\\sim \\\\rho\\\\_0, \\\\mathbf{u} \\\\sim \\\\pi\\\\_\\\\theta}\\\\left[ \\\\sum\\\\_{k=0}^\\\\infty \\\\max\\\\big(0, \\\\tilde{C}(\\\\mathbf{x}^k, \\\\mathbf{u}^k) \\\\big) \\\\right] \\\\tag{$\\\\star$}\\n$$\\n\\n\\nwhich requires the **policy-gradient theorem** to compute the gradient, and\\n\\n\\n$$\\n\\\\nabla_\\\\theta \\\\mathbb{E}\\\\_{\\\\mathbf{x} \\\\sim \\\\rho}[ \\\\tilde{C}\\\\_\\\\theta^{(m)}(\\\\mathbf{x}) ] \\\\Bigg|\\\\_{\\\\rho = \\\\rho^{\\\\pi_\\\\theta}} = \\\\mathbb{E}\\\\_{\\\\mathbf{x} \\\\sim \\\\rho^{\\\\pi_\\\\theta}}[ \\\\nabla_\\\\theta \\\\tilde{C}\\\\_\\\\theta^{(m)}(\\\\mathbf{x}) ] \\\\tag{$\\\\dagger$}\\n$$\\nwhich only uses the **score function gradient**.\\n\\nIn particular, while our original goal is to satisfy the DCBF condition everywhere, i.e.,\\n$$\\n\\\\tilde{C}_\\\\theta^{(m)}(\\\\mathbf{x}) \\\\leq 0 \\\\quad \\\\forall \\\\mathbf{x},\\n$$\\n($\\\\star$) will also try to minimize the stationary density at states $\\\\mathbf{x}$ where the constraint is violated, which is **not** what we want.\\nHence, we use ($\\\\dagger$) in (12) and in our experiments.\\n\\nWe have included the above explanation in Appendix B.2 in the revised version.\\n\\nTo verify this, **we have also performed additional experiments where we try using ($\\\\star$) instead of ($\\\\dagger$)**. The results show that using ($\\\\star$) behaves overly conservatively and converges much slower in cost. This matches the discussion above, that using ($\\\\star$) additionally tries to avoid _states_ where the DCBF constraint violation is high, which leads to unnecessary conservatism. We have included these new results in Appendix B.2 in the revised version.\"}",
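The contrast drawn above between the policy-gradient estimator (★) and the score-function-style gradient (†) can be made concrete with a small autodiff snippet. This is a hedged sketch, not the paper's code: the one-parameter `C_tilde` surrogate and the Gaussian state samples are hypothetical stand-ins, and the inner expectation over actions is collapsed into an explicit dependence on `theta` for simplicity, purely to show where gradients do and do not flow in the (†) estimator.

```python
import torch

# Hypothetical toy: a single policy parameter theta, and a constraint surrogate
# C_tilde whose value depends on theta explicitly (illustrative only).
theta = torch.tensor([0.5], requires_grad=True)

def C_tilde(x):
    # positive value = DCBF condition violated at state x (toy surrogate)
    return (theta * x) ** 2 - 1.0

# (dagger)-style estimate: states are sampled under the current policy but then
# treated as a fixed dataset, so no gradient flows through the state
# distribution itself -- only through C_tilde's explicit dependence on theta.
with torch.no_grad():
    states = torch.randn(256)            # stand-in for x ~ rho^{pi_theta}

violation = torch.relu(C_tilde(states)).mean()
violation.backward()
print(theta.grad)

# A (star)-style estimator would additionally carry REINFORCE-like log-prob
# terms so that theta is also pushed to reshape the visited state distribution,
# which is the extra conservatism the reply above argues against.
```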
"{\"title\": \"Author Reply (3/4)\", \"comment\": \"> **Q5:** The reason for using discrete graph CBFs should be explained clearly. Note that we can regard the multi-agent system as a large but single agent. Then, you can directly use the discrete CBF given in Theorem 2 to learn safe policies. In this case, the distributed control nature can still be preserved as the learned observation-based policy is end-to-end.\\n\\n**R:** Thanks for the suggestion! The main reason is that a discrete graph CBF (DGCBF) provides an additional structure that **theoretically guarantees generalization to larger numbers of agents**.\\n\\nA DGCBF $\\\\tilde{B}$ can be used to construct a DCBF $B$ (Appendix A.6 of the revised version). Hence, both can guarantee safety.\\nHowever, while $B$ only applies to a specific $N$, **we prove in Appendix A.7 of the revised version that a _single_ DGCBF $\\\\tilde{B}$ that holds for $\\\\bar{N}$ agents can be used to construct a DCBF for _any_ $N > \\\\bar{N}$ agents**.\\nThis provides theoretical evidence for training DGPPO on a small number of agents, then deploying it on a much larger number of agents.\\n\\nTo further support this, we have performed experiments where DGPPO is trained with 8 agents and tested on up to 512 agents.\\nThe results below suggest that DGPPO can maintain high performance even when deployed on more agents.\\n\\n|Number of agents|Safety rate|Normalized cost|\\n|---|---|---|\\n|8|$1.000\\\\pm0.000$|$1.673\\\\pm0.430$|\\n|16|$0.992\\\\pm0.088$|$1.784\\\\pm0.316$|\\n|32|$0.987\\\\pm0.112$|$1.748\\\\pm0.235$|\\n|64|$0.986\\\\pm0.118$|$1.799\\\\pm0.418$|\\n|128|$0.982\\\\pm0.133$|$1.839\\\\pm0.323$|\\n|256|$0.985\\\\pm0.122$|$1.823\\\\pm0.366$|\\n|512|$0.985\\\\pm0.123$|$1.821\\\\pm0.390$|\\n\\nWe have added these additional experimental results in Appendix C.8 of the revised manuscript.\\n\\n---\\n\\n> **Q6 (1/3):** Theorem 4 is hard to understand.\\n\\n**R:** Thanks for the feedback! Theorem 4 suggests that the attention mechanism can be used to construct a DGCBF $\\\\tilde{B}$ such that the DGCBF condition (16) can be satisfied, even during neighborhood changes.\\n\\nTo improve clarity, we have moved Theorem 4 to the Appendix.\\nInstead, we have replaced it with an informal version (Informal Theorem 4 in Section 4.4) in the revised paper that more clearly brings the key ideas across.\\n\\n---\\n\\n> **Q6 (2/3):** What is the relationship between the discrete graph CBF and the discrete CBF? \\n\\n**R:** On one hand, a DGCBF $\\\\tilde{B}$ can be used to construct a DCBF $B$ (Appendix A.6 of the revised version). Hence, both can guarantee safety.\\nOn the other hand, while $B$ only applies to a specific $N$, **we prove in Appendix A.7 of the revised version that a _single_ DGCBF $\\\\tilde{B}$ that holds for $\\\\bar{N}$ agents can be used to construct DCBF for any $N > \\\\bar{N}$**.\\n\\n---\\n\\n> **Q6 (3/3):** Similar to Theorem 1, it is important for the authors to show that the safe set is forward invariant based on the discrete graph CBF.\\n\\n**R:** Thanks for the comment! We have proved **the safe set is forward invariant** using DGCBF in Appendix A.6 in the revised manuscript. Furthermore, we have also shown that DGCBF can be used to guarantee safety for **any number of agents** in Appendix A.7 in the revised manuscript.\"}",
"{\"title\": \"Reply to all\", \"comment\": \"We thank the reviewers for their valuable comments.\\nWe are excited that the reviewers have identified the importance of the problem ($\\\\color{#648FFF}{\\\\textsf{uB5M}}$, $\\\\color{#DC267F}{\\\\textsf{LZax}}$),\\nthe novelty of our technical contributions (**all** reviewers),\\nappreciated our extensive empirical validation ($\\\\color{#E69F00}{\\\\textsf{XeoF}}$, $\\\\color{#DC267F}{\\\\textsf{LZax}}$),\\nand found the paper well-motivated with good presentation ($\\\\color{#648FFF}{\\\\textsf{uB5M}}$, $\\\\color{#E69F00}{\\\\textsf{XeoF}}$).\\nWe believe that DGPPO takes a significant step towards greatly improving the safety of multi-agent systems with **unknown, discrete-time dynamics _without_ an available performant reference controller** by applying distributed graph CBFs and multi-agent reinforcement learning.\\n\\n---\\n\\nAs _all_ reviewers have recognized our technical novelty, the primary concerns stem from the scalability and generalizability (all reviewers), \\nhyperparameter sensitivity ($\\\\color{#E69F00}{\\\\textsf{XeoF}}$),\\nand clarity of theoretical results ($\\\\color{#DC267F}{\\\\textsf{LZax}}$).\\n\\nIn our updated revision, we provide major improvements by clarifying _all_ raised questions.\\nWe provide a brief summary of notable changes below. All references to sections refer to the **revised** version.\\n\\n## 1. Generalizability of DGPPO\\n\\nWe have **added an additional theorem** in Appendix A.7 which theoretically proves that the discrete graph control barrier function (DGCBF) can be generalized to more agents than initially trained for.\\n\\nNext, we **perform an additional experiment where we train DGPPO with 8 agents and deploy the learned policy on up to 512 agents** (Appendix C.8). The results, shown below, demonstrate that DGPPO can maintain a high safety rate and low costs even with larger numbers of agents.\\n\\n|Number of agents|Safety rate|Normalized cost|\\n|---|---|---|\\n|8|$1.000\\\\pm0.000$|$1.673\\\\pm0.430$|\\n|16|$0.992\\\\pm0.088$|$1.784\\\\pm0.316$|\\n|32|$0.987\\\\pm0.112$|$1.748\\\\pm0.235$|\\n|64|$0.986\\\\pm0.118$|$1.799\\\\pm0.418$|\\n|128|$0.982\\\\pm0.133$|$1.839\\\\pm0.323$|\\n|256|$0.985\\\\pm0.122$|$1.823\\\\pm0.366$|\\n|512|$0.985\\\\pm0.123$|$1.821\\\\pm0.390$|\\n\\n## 2. Hyperparameter Sensitivity\\n\\nFigure 6(b) shows that using larger values of $\\\\nu$ results in a slower reduction of the cumulative cost.\\nWe have **performed additional experiments that confirm that only the rate and _not_ the final converged policy is affected by $\\\\nu$** (Appendix C.6.4).\\nThis is a _significant improvement_ compared to the baseline methods, where both the rate **and** the performance of the converged policy are greatly affected by the choice of hyperparameter.\\n\\n## 3. More Clarification on Theoretical Analysis\\n\\nWe have significantly clarified our theoretical analysis. In short, we have:\\n\\n1. Included a more detailed derivation of our proposed policy loss (Appendix B.1)\\n1. Clarified the difference between **policy gradients** and **score function gradients** during the policy updates w.r.t. the DCBF condition and performed additional experiments that demonstrate the difference (Appendix B.2)\\n2. 
Provided proofs for the **safety** (Appendix A.6) and **generalizability** (Appendix A.7) guarantee of DGCBF.\\n\\n---\\n\\nWe hope the new presentation better presents the contributions of our method in improving the safety and task-performance of multi-agent systems by generalizing distributed CBFs to more general settings.\\n\\nWe have tried our best to resolve all the questions raised in the individual responses below.\\nIf the reviewers have any additional questions/comments/concerns, please let us know.\\nWe appreciate the reviewer's precious time in providing their valuable feedback.\"}",
"{\"summary\": \"The proposed DGPPO framework addresses challenges in multi-agent systems (MAS) by learning both a discrete graph control barrier function (DGCBF) and a high-performance safe policy under unknown discrete-time dynamics, changing neighborhoods, and input constraints. DGPPO combines reinforcement learning and DGCBF, achieving high task performance and safety across varied environments without needing a pre-existing nominal policy or multiple hyperparameter sets, consistently outperforming other methods in both metrics of safety rate vs cost for various simulations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method (DGPPO) is an elegant way to solve the discrete-time distributed MASOCP (multi-agent safe optimal control problem) with unknown environment dynamics. This assumption was not present in previous work which had to differentiate through the given transition functions.\", \"The theorems introduced provide a solid foundation for the applicability of DGPPO in the discrete-time setting.\"], \"weaknesses\": [\"Scalability appears limited (only up to 7 agents) compared to the continuous time setting of GCBF+ [1] (most likely due to the present of unknown environment dynamics and the noise introduced through the sample score function gradient).\", \"The stability of DGPPO compared to the baselines does not seem appropriately explained. Is it a purely empirical observation or is there some theoretical justification available?\", \"In the given setting, why is the assumption of unknown dynamics interesting? To me, the environments considered are purely the settings of [1] without using the environment dynamics directly (even though they are available). Would it not be a better idea to consider an environment where the dynamics are not as simple as the ones in [1] or some complex unknown function (for e.g., common Mujoco robots)?\"], \"references\": \"[1] GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control, Zhang et al, T-RO, 2024\", \"questions\": \"1. Why is the dependency of the algorithm on a nominal policy a bad idea in the given settings? Since it appears easy enough to construct one (say a PID controller like in [1]) for the environments given, is this the right direction?\\n2. What is the difference between the training and inference situations in terms of the number of agents? Does the algorithm need to be retrained for every new number of agents unlike in [1] where the algorithm was trained on 8 agents and deployed on up to 1024 agents (albeit being purely concerned with single goal reaching while avoiding collisions)?\\n3. With regards to the sample efficiency and computation requirements, how is DGPPO w.r.t. the baselines (I noticed the training time was listed as 12 hours on the reference specifications)? On a related note, how is the benefit of a constant set of hyperparameters demonstrated? Can we confidently say the hyperparameter search for the baselines takes significantly longer (in wall clock time on a comparable machine)?\\n4.What are the restrictions on the definition of the avoid set $\\\\mathcal{A}_i$ and the assumptions on the function $h_i^{(m)}$? Do the avoid sets primarily represent distance to $y^k$ greater than some safe radius?\\n5. The LiDAR part of the observation appears less clear. From the appendix (Sec B.2.1) is it right to say that only the LiDAR environments use the 32 equally spaced ray capturing relative positions? 
How are the obstacles in the VMAS environments represented to the agent?\\n6. The experiments with scalability to multiple agents (Fig. 5) appear quite close to the baselines. Is there a better comparison available?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your answers that clarify most of the concerns I highlighted in the review.\\nBased on this, I have decided to raise my score.\"}"
]
} |
1W6oINj8ne | BRSSD10k: A SEGMENTATION DATASET OF BANGLADESHI ROAD SCENARIO | [
"Mirza Nihal Baig",
"Mahdi Murshed Patwary",
"Husne Ara Chowdhury",
"Md. Shahidur Rahman"
] | In this paper, we present a novel Bangladeshi Road Scenario Segmentation Dataset designed to advance autonomous driving technologies under the challenging and diverse road conditions of Bangladesh. This comprehensive instance segmentation dataset comprises 10,082 high-resolution images captured across nine major cities, including Dhaka, Sylhet, Chittagong, and Rajshahi, addressing the critical need for region-specific computer vision data in developing countries. Unlike existing autonomous driving datasets that primarily focus on Western road conditions, BRSSD10k encompasses a wide range of environments unique to Bangladesh, including unstructured urban areas, hilly terrains, village roads, and densely populated city centers. The dataset features instance segmentation annotations with classes specifically tailored to reflect the distinctive elements of Bangladeshi roads, such as rickshaws, CNGs (auto-rickshaws), informal roadside stalls, and various nonstandard vehicles. To demonstrate its utility as a benchmarking tool for autonomous driving systems, we present comparative results from several state-of-the-art instance segmentation models tested on this dataset, achieving an mAP of 0.441. This evaluation not only showcases the dataset's effectiveness in assessing model performance but also underscores the need for adaptive algorithms capable of handling diverse and unpredictable urban environments in the context of autonomous navigation. | [
"Instance Segmentation",
"Computer Vision",
"Dataset",
"Autonomous Driving",
"Bangadeshi Road"
] | Reject | https://openreview.net/pdf?id=1W6oINj8ne | https://openreview.net/forum?id=1W6oINj8ne | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"mZ4NyiPWT4",
"knYS8db9Z0",
"kJKWp9CLED",
"TvOzjVJunc",
"TtSPqoQwXD",
"2sg5Czapep"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1734924394774,
1729920325204,
1730785004346,
1730347158887,
1737523963900,
1730499190643
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9147/Area_Chair_3fKA"
],
[
"ICLR.cc/2025/Conference/Submission9147/Reviewer_zfNC"
],
[
"ICLR.cc/2025/Conference/Submission9147/Reviewer_pH3o"
],
[
"ICLR.cc/2025/Conference/Submission9147/Reviewer_ypst"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9147/Reviewer_bfsm"
]
],
"structured_content_str": [
"{\"metareview\": \"The submission presents a dataset for instance segmentation on road scenes in a novel geography. While the problem is an important one, the dataset can improve in several ways. The mask qualities are quite coarse and should match state-of-the-art datasets for road scenes. The class balance might not be appropriate for semantic segmentation given under-representation for several classes. Besides instance segmentation, there could be support for panoptic or universal segmentation too, which are important in practical applications.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers recommend rejection based on the above concerns and no author rebuttal is submitted. It is suggested for the authors to improve the submission based on the numerous review suggestions. The AC agrees with the reviewer consensus that the paper may not be accepted at ICLR.\"}",
"{\"summary\": \"This paper proposes a road segmentation dataset for autonomous driving purpose. It focuses on the scenarios in Bangladeshi and make specific adaptions in class definition and labeling. Validation experiments are conducted.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"It clearly analyzes the practical scenario characteristics in Bangladeshi. The class definition and labeling process fully fits the scenarios. This dataset acts as a valuable resource for developing autonomous driving models in this country. It may also contribute to general vision perception tasks.\", \"weaknesses\": \"1.The authors just use one paragraph to summarize the related datasets without any detailed comparison. I do not think the authors really understand the development of this field as there are only eight references.\\n2. The scenarios in the dataset are more likely to be corner cases comparing with the mainstream segmentation datasets. Its universality cannot be verified.\\n3.The structure of the manuscript is poorly organized. The logic between sections 3-6 are chaotic.\\n4.It is really confusing that the authors validate the segmentation dataset with YOLO.\\n5. It is really funny that the GT maps in Figure 3 are wrong.\", \"questions\": \"The authors strongly emphasize that the main motivation of this work is that there lack segmentation datasets in Bangladeshi. It should be clarified that the contribution of a dataset does not lay in its location, but the data quality, diversity, and scale.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces BRSSD10k, a segmentation dataset specifically tailored to the unique and diverse road scenarios in Bangladesh. This dataset consists of 10,082 high-resolution images from nine cities across the country, with detailed annotations covering 34 classes that reflect the region's distinct transportation environment. Classes include locally prevalent elements such as rickshaws, CNGs (auto-rickshaws), and informal roadside stalls, which are critical for developing robust autonomous driving systems for Bangladeshi roads.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The authors have compiled a comprehensive dataset with over 10,000 high-resolution images and detailed instance segmentation annotations, covering a diverse range of geographic regions within Bangladesh.\", \"A rigorous two-stage validation process for annotations ensures high-quality data, which is essential for developing robust and accurate computer vision models.\", \"Comparative evaluation with multiple state-of-the-art models (e.g., YOLOv5, YOLOv8, YOLOv9) showcases the benchmark's effectiveness and sets a baseline for future research on BRSSD10k.\", \"The inclusion of region-specific object classes (e.g., rickshaws, CNGs, informal stalls) provides a unique contribution, enabling autonomous systems to better understand and navigate environments outside of structured Western road layouts.\"], \"weaknesses\": [\"The dataset only covers limited regions in one country, which is not enough to evaluate the generalization ability of segmentation.\", \"The quality of the segmentation masks is not satisfactory.\", \"Certain critical classes, such as traffic lights, construction vehicles, and road blockers, are underrepresented in the dataset.\", \"The dataset currently lacks nighttime and adverse weather imagery (e.g., rain or fog), which are essential for real-world segmentation.\", \"The paper only evaluates three versions of the YOLO model, which may limit insights into how BRSSD10k performs across different model architectures.\", \"There is no analysis on how models trained on BRSSD10k generalize to other datasets or vice versa.\"], \"questions\": [\"The authors need to consider including more baselines for evaluation.\"], \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"Human faces appear on the road. They are not removed and blurred.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a new dataset focused on object detection and segmentation,\\ntailored to the specific driving conditions in Bangladesh in terms of its\\nappearance and taxonomy.\\n\\nThe dataset encompasses ~10k camera images collected in Bangladesh using a cell\\nphone. The images are sourced from video chunks originating from diverse\\nregions, and contiguous footage is sampled to 1 Hz. The frames are annotated\\nwith object bounding boxes and segmentation masks.\\n\\nThe paper motivates the dataset as helpful in developing computer vision\\nalgorithms specific to Bangladeshi driving scenes and performs a brief\\ncomparative analysis of different YOLO-based models trained on this dataset. The\\npaper helpfuly provides metadata like class and geographic distribution\\nhistograms as well as many qualitative examples in order to help the reader get\\na sense of the dataset.\\n\\nWhile it is definitely important to promote datasets which cover a diverse range\\nof environments, I think the quantitative argument made in this paper to\\nmotivate the dataset could be strengthened. For example, the argument could be\\nimproved by showing experimental results which demonstrate the limitations of\\nother dataset on data collected in Bangladesh.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"- [S1] Diverse object classes from multiple cities in Bangladesh, reflecting a\\n unique label distribution that is materially different from other established\\n datasets such as Waymo Open and nuScenes.\\n- [S2] The authors also present the results of a few detection baselines based\\n on YOLO, trained and evaluated on this dataset's corresponding splits.\", \"weaknesses\": \"- [W1] The related section could be made a bit more comprehensive. For example,\\n it would be interesting to also discuss other datasets focusing on non-Western\\n streets, such as the dataset introduced in [@traphic]. Even though it's\\n mentioned later in the paper, the BadODD dataset should also be covered in the\\n related work section and in the relevant tables.\\n- [W2] While it is helpful to benchmark a few existing models on the proposed\\n dataset, it would be beneficial to also compare these numbers with those from\\n models trained on a mainstream dataset such as CityScapes or Mapillary Vistas.\\n If models trained on a dataset like Cityscapes or Mapillary Vistas fail to\\n perform well on this dataset, that would make for a good quantitative argument\\n for why this dataset will help the community.\\n - As a side-note, even if the taxonomy another dataset won't match the one in\\n BRSSD10k perfectly, this gap could be alleviated by the use of an\\n off-the-shelf VLM, which have been shown to be very good at tasks like open\\n set object detection---see, for example, Grounding DINO [@liu2024grounding].\\n- Minor Suggestions\\n - Sections 7.3, 7.4, and 7.5 can be shortened and replaced with more\\n comparisons, or additional details about the dataset or its software\\n development kit. Readers can refer to the corresponding references if they\\n are curious about the specific loss functions used to train these models.\\n - The citation markers seem to be missing parentheses around them. For\\n example, a sentence like \\\"... complex environments He et al. (2017)\\\" should\\n be formatted like \\\"... complex environments (He et al., 2017).\\\"\\n- References:\\n - [@traphic]: Chandra, Rohan, et al. 
\\\"Traphic: Trajectory prediction in dense\\n and heterogeneous traffic using weighted interactions.\\\" CVPR. 2019.\\n - [@liu2024grounding]: Liu, Shilong, et al. \\\"Grounding dino: Marrying dino\\n with grounded pre-training for open-set object detection.\\\" arXiv preprint\", \"arxiv\": \"2303.05499 (2023).\", \"questions\": \"- [Q1] How is the dataset split into train/val/test? Do you perform geographic\\n splitting, or is the splitting purely at random?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The manuscripts presents a novel road-driving dataset for instance segmentation. The dataset includes more than 10000 high resolution images acquired along 9 cities in Bangladesh. The dataset taxonomy includes 34 classes that reflect typical needs of autonomous driving and regional characteristics. The taxonomy is mostly well-balanced (Figure 2). There are around 6000 training, 2000 validation and 2000 test images. The presented experiments involve object detection with stock models and report mAP50 performance on validation and test datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The dataset will likely prove as a valuable contribution to the field.\", \"Many stuff classes are annotated (sky, road, wall, fence).\", \"Little effort is required to extend the dataset for panoptic segmentation.\"], \"weaknesses\": [\"it is hard to recommend n+1-th road-driving dataset for publication at a major conference\", \"dataset focuses on typical images, for which our models are known to work well\", \"the baseline models address only object detection (some universal segmentation model such as MaskFormer would be a better choice)\"], \"questions\": \"It would make sense to extend the dataset with full panoptic labels.\", \"it_would_make_sense_to_cite_and_discuss_related_road_driving_datasets\": \"ACDC, WildDash, FishyScapes, SegmentMeIfYouCan.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
1VwWi6zbxs | Mastering Task Arithmetic: $\tau$Jp as a Key Indicator for Weight Disentanglement | [
"Kotaro Yoshida",
"Yuji Naraki",
"Takafumi Horie",
"Ryosuke Yamaki",
"Ryotaro Shimizu",
"Yuki Saito",
"Julian McAuley",
"Hiroki Naganuma"
] | Model-editing techniques using task arithmetic have rapidly gained attention.
Through task arithmetic, simple arithmetic operations on the weights of pre-trained and fine-tuned models create desired models, such as multi-task models, models in which specific tasks are unsolvable, or domain-transferred models.
However, task arithmetic faces challenges, such as poor reproducibility and the high cost associated with adjusting coefficients in the arithmetic operations on model parameters, which have limited its practical success. In this paper, we present three key contributions in the context of task addition and task negation within task arithmetic. First, we propose a new metric called $\tau$Jp, which is based on the product of the task vector ($\tau$) and the Jacobian of the pre-trained model with respect to its weights. We show that $\tau$Jp has a causal relationship with the interference that occurs from arithmetic operations. Second, we show that introducing regularization to minimize $\tau$Jp significantly mitigates interference between tasks at inference, which leads to the elimination of coefficient tuning and improved accuracy on each task.
Third, in the context of incremental learning, we demonstrate that our $\tau$Jp regularization achieves more robust performance in environments where access to future tasks is unavailable, thus validating the scalability of the approach.
Finally, we demonstrate that the $\tau$Jp regularizer further reinforces the performance of task arithmetic by leveraging publicly available fine-tuned models, offering practical benefits for real-world applications.
Our code is available at https://github.com/katoro8989/tau-Jp_Task_Arithmetic | [
"task arithmetic",
"model editing",
"task vector"
] | Accept (Poster) | https://openreview.net/pdf?id=1VwWi6zbxs | https://openreview.net/forum?id=1VwWi6zbxs | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xTQzotrg2a",
"wxhQjCl1QV",
"wcw1cPPKBi",
"uBf7FOCik7",
"snLHbWfbTM",
"p8HPyKOZYe",
"oAGM0MDjks",
"mWaa5L4qD3",
"lm7zVYk2SO",
"hqdDJQZtqq",
"hCeR5ntOzY",
"feXOyoqa2i",
"fc0LwO1Qz5",
"esnhDlEwLj",
"dzIpKRyn7f",
"ZcK0PJasiE",
"Z5prXzKC0S",
"YkTXQLOdp5",
"YcrxyCq7Om",
"X6aRvSdXPz",
"X6N5EeRIXg",
"WCrYYRvFVn",
"VB9CyNveae",
"TtORIKUHKo",
"R9h4B2JLpU",
"Q60iLzPsQI",
"Q1qdw0pQDg",
"NbBU6a93Q6",
"NN649648D3",
"Mx1SShTpGm",
"MLe2ZRSbys",
"JJBaLRrPD3",
"HyIMhBTXVO",
"G99PwHlG89",
"FaYqadPPyr",
"5uT5xbVlRE",
"5nI6dZQTnv",
"4vjRHofG9p",
"4BVv6SejVf",
"3yWpmDJTSv",
"3BRH8lRgeH",
"2cGXBzw2ei",
"28pmmoPuMZ",
"18c6V1ssKf",
"0SuYXopDIx"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1732612150210,
1732278213290,
1733142383583,
1733037895945,
1732278443989,
1732337731543,
1737524235571,
1732277637875,
1732268408662,
1732692380256,
1732691607351,
1732692414044,
1730307802728,
1732268164451,
1732890259227,
1733044509356,
1732274341586,
1732266768433,
1732267352916,
1732277356475,
1732890129227,
1732691859398,
1732594354625,
1732276996784,
1730671313878,
1732280808434,
1734440961917,
1732411211005,
1732760511436,
1732541689429,
1732275681918,
1732267653387,
1730381300804,
1732277829590,
1732267145580,
1732692753037,
1732843655907,
1732890776504,
1732586200087,
1733147686126,
1732890562533,
1732278625291,
1732276204303,
1732267956388,
1729265357401
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_hopm"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_onTx"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_n6ZH"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_rGLy"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_onTx"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_onTx"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_hopm"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_rGLy"
],
[
"ICLR.cc/2025/Conference/Submission13117/Area_Chair_R1CM"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_onTx"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_onTx"
],
[
"ICLR.cc/2025/Conference/Submission13117/Area_Chair_R1CM"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_onTx"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13117/Reviewer_n6ZH"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for answering my questions. My main concern was about comparing to baselines with similar data requirements (such as AdaMerging) which has been addressed, so I will raise my score accordingly.\"}",
"{\"title\": \"Response to reviewer n6ZH\", \"comment\": \"### Regarding Weakness 2\\n\\n>While the empirical results are compelling, the paper lacks a thorough theoretical explanation for why the proposed regularization leads to better performance compared to other methods, such as those discussed in Ortiz-Jimenez et al. (2023). I am confused about why a simple and soft regularization results in such improvement compared to [1]. A deeper theoretical analysis could strengthen the paper's contributions.\\n\\nWe appreciate your feedback on the effectiveness of the proposed regularization.\\n\\nOur regularization term is defined as $||(\\\\theta - \\\\theta_0)^T \\\\nabla_\\\\theta f(\\\\theta_0, x_{\\\\text{other}})||^2 $, designed to encourage the task vector ($\\\\theta - \\\\theta_0$) to be orthogonal to the output gradient of other tasks $\\\\nabla_\\\\theta f(\\\\theta_0, x_{\\\\text{other}})$ in the pre-trained model (see Section 4.1 for details). Similar approaches, which aim to orthogonalize model weights to a specific vector by incorporating the L2 norm of their inner product as a regularization term, have been explored in previous studies (e.g., [2]) and have demonstrated their effectiveness. Our method is based on a similar idea, and our experimental results confirm that the task vector is effectively guided to be orthogonal to $\\\\nabla_\\\\theta f(\\\\theta_0, x_{\\\\text{other}})$. This effect, as a result, keeps $ \\\\tau_{\\\\text{Jp}}$ small, and the relationship between this reduction, improved weight disentanglement, and enhanced task arithmetic performance is elaborated in Section 3.\\n\\n[2] Wang, Xiao, et al. \\\"Orthogonal subspace learning for language model continual learning.\\\" arXiv preprint arXiv:2310.14152 (2023).\"}",
"{\"title\": \"Further discussion\", \"comment\": \"Thanks for the detailed reply.\\n\\nRegarding the computational cost, I'm also a user of AdaMerging which I suspect has a comparable memory consumption with other methods. Could you kindly remind me where are the authors\\u2019 reported results?\\n\\nRegarding the bottleneck, it's natural to suspect that in a larger model (even larger than ViT-L-14), the proposed method could not beat baselines, since it achieved a 99% norm acc at the ViT-L-14 level. Thus, the additional data requirement and computational cost could not be neglected.\"}",
"{\"comment\": \"Thank you for your response and for providing further clarification.\\n\\nI appreciate the effort and substantial work the authors have put into this paper. However, after carefully reviewing the manuscript, I still find the contributions incremental compared to [1]. While the paper adds some theoretical and experimental insights, it appears to primarily build upon existing ideas without sufficiently distinguishing itself.\\n\\nThe proposed regularization term is an interesting addition, but its novelty seems limited given the foundational similarity to [1], which already highlights the importance of model linearization and task-specific kernel localization. Despite the authors\\u2019 theoretical expansion and experimental validation, I remain unconvinced that this paper represents a significant advancement over prior work.\\n\\nThat said, I acknowledge the effort involved in conducting this research and the methodological rigor in its presentation. After consideration, I maintain my borderline reject decision, as I believe further refinement is needed to emphasize the originality and impact of this work.\"}",
"{\"title\": \"Response to reviewer n6ZH\", \"comment\": \"### Regarding Weakness 3\\n\\n>The authors briefly mention tuning the regularization strength but do not provide sufficient details on how this hyperparameter was selected. The sensitive analysis of this hyperparameter is also necessary for the paper.\\n\\nWe thank you for your comments regarding the hyperparameter $\\\\lambda$.\\n\\nThe strength of the regularization term, $\\\\lambda$, was tuned through a grid search over [1e-3, 1e-2, 1e-1], using validation accuracy as the evaluation metric. Due to limited computational resources, we reused the $\\\\lambda$ value obtained from a specific task (Image: Cars, NLP: CoLA, Civil Comments) across all experiments. While further analysis of the sensitivity to $\\\\lambda$ is necessary, we empirically confirmed that using a unified $\\\\lambda$ across all experiments still yields the benefits of the proposed regularization, suggesting that $\\\\lambda$ is not overly sensitive.\"}",
"{\"title\": \"Response to reviewer rGLy\", \"comment\": \"Thank you very much for raising the score and for your thoughtful review. We believe we have addressed all of your concerns, but please let us know if you have any further questions or additional concerns.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to reviewer rGLy\", \"comment\": \"### Regarding Questions\\n\\n>1.The related work section could be improved by explicitly connecting prior studies to this paper's contributions, emphasizing how the proposed method addresses existing limitations.\\n\\n>2.Consider moving the related work section after the methods section, especially since the current structure delays the introduction of the proposed method until page 5. This change would allow readers to quickly understand the proposed approach before diving into comparisons, enhancing readability and engagement.\\n\\nThank you for your valuable suggestions. \\n\\nFirst, we have improved the Related Work section to clearly articulate the connection between existing methods and our proposed approach. Specifically, we highlighted the limitations of existing methods and demonstrated how our work addresses these issues. Additionally, we included discussions on related studies concerning the idea of $\\\\tau_{\\\\text{Jp}}$ regularization, providing a more comprehensive context for our contributions.\\n\\nIn addition, we have relocated the Related Work section to follow the Method section for improved logical flow and clarity.\"}",
"{\"title\": \"Response to reviewer onTx\", \"comment\": \"### Regarding Weakness 4\\n>The derivation of Equation 7 from the weight disentanglement definition is non-trivial and should be explained more clearly.\\n\\nWe greatly appreciate your helpful feedback.\\n\\nFirst, regarding the definition of weight disentanglement, we kindly refer you to the beginning of the second paragraph in Section 2.2 for a detailed explanation.\\n\\nNext, we have added further clarification on how Equation 7 is derived from this definition of weight disentanglement (highlighted in red in Section 3.1). Specifically, satisfying weight disentanglement means that:\\n\\n$f\\\\left(x_A ; \\\\theta_0+\\\\alpha_A \\\\tau_A+\\\\alpha_B \\\\tau_B\\\\right)= f\\\\left(x_A ; \\\\theta_0+\\\\alpha_A \\\\tau_A\\\\right)$\\n\\nand\\n\\n$f\\\\left(x_B ; \\\\theta_0+\\\\alpha_A \\\\tau_A+\\\\alpha_B \\\\tau_B\\\\right) = f\\\\left(x_B ; \\\\theta_0+\\\\alpha_B \\\\tau_B\\\\right)$\\n\\nfor any $\\\\alpha_A$ and $\\\\alpha_B$. Based on Equation 6, this naturally leads to the derivation of Equation 7.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"### **Regarding Generalization in the Context of MTL**\\n>Meanwhile, I'd like to consider this method as an MTL method, thus some results regarding generalization would strengthen this work, e.g. generalizing to an entirely unseen test set (table 3 from Adamerging). I encourage the authors to include this experiment if time permits.\\n\\nAs suggested, we evaluated the generalization performance of our method on unseen tasks within the context of MTL.\\n\\nFirst, we would like to emphasize that the primary objective of our method lies in **\\u201cweight disentanglement\\u201d**, as outlined in Equation 1 of the paper. Specifically, weight disentanglement aims to suppress interference between merged task vectors and minimize the negative impact on the pre-trained model\\u2019s original performance on tasks beyond the target. Thus, generalization to unseen tasks falls outside the direct scope of our method\\u2019s objectives. However, we also argue that weight disentanglement can be beneficial for generalization to unseen tasks when considered from the perspective of preserving the pre-trained model\\u2019s performance. We explain this further below.\", \"we_conducted_the_experiments_under_the_following_setup\": \"The training tasks consisted of six tasks\\u2014[Cars, GTSRB, DTD, EuroSAT, MNIST, SUN397]\\u2014while the unseen tasks were set as RESISC45 and SVHN, following the experimental setup in [2].\", \"among_the_unseen_tasks\": \"\\u2022\\t**SVHN**: Similar to MNIST, SVHN is a 10-class digit classification task. If the knowledge learned from MNIST in the training tasks can be effectively leveraged, performance improvements on SVHN can be expected. Therefore, MTL\\u2019s generalization capabilities are likely more relevant than weight disentanglement for this task.\\n\\n\\u2022\\t**RESISC45**: Like EuroSAT, RESISC45 is a classification task for aerial imagery. However, RESISC45 includes 35 additional classes not covered by EuroSAT\\u2019s 10 classes. As such, the knowledge obtained from EuroSAT alone may not suffice for many instances in RESISC45. In this case, maintaining the pre-trained model\\u2019s knowledge through weight disentanglement is expected to result in higher performance.\\n\\nThe experimental results, all measured in accuracy, are as follows:\\n\\n| Method | Training Tasks Avg. | SVHN | RESISC45 |\\n| :--- | :---: | :---: | :---: |\\n| Pre-trained | 48.8 | 31.6 | 60.2 |\\n| Non-lin. FT | 73.4 | 50.2 | 52.2 |\\n| Linear FT | 77.4 | 38.7 | 46.6 |\\n| AdaMerging | 80.3 | **60.9** | 50.2 |\\n| MTL | **86.3** | 60.8 | 42.9 |\\n| Ours | 85.4 | 42.4 | **54.3** |\\n\\nAs noted above, methods like MTL and AdaMerging, which do not prioritize weight disentanglement, demonstrate high generalization performance on SVHN. However, their performance on RESISC45 is significantly degraded, likely due to the negative impact on the pre-trained model\\u2019s knowledge. In contrast, our method, which focuses on weight disentanglement, maintains the pre-trained model\\u2019s performance on RESISC45 while showing lower generalization performance on SVHN. To reiterate, the reduced generalization performance on SVHN is a consequence of weight disentanglement and aligns with its definition in our approach.\\n\\nThis discussion has been added to Appendix E.5 for further reference.\\n\\n[2] Yang et al. AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"Thank you for your detailed response.\\n\\n### **Regarding Memory Consumption**\\n> could you please provide the computational cost of both efficient and strict reg so that I could have a clear view? I double-checked the code online and have the following question: in the penalty iter, the newly introduced data batch and additional jvp product both increase the memory consumption, I suspect nearly double memory compared to Linear FT. Besides, the authors deployed four V100 GPUs which have 64GB VRAM in total, why do the authors leave most memory idle? Please correct me if I'm wrong.\\n\\nFirst, below we present a comparison of runtime and memory consumption on ViT-B-32, including the strict regularization.\\nThe memory consumption values we previously reported represent the peak memory usage on a single device among the four V100 GPUs used in our setup. We have reported the highest memory usage among the four devices, as also indicated in the \\u201cAllocated / Device (GB)\\u201d column of the table below.\\n\\n| | Abs. | Norm. | Sec. / Iter. | Allocated / Device (GB) |\\n| :--- | :---: | :---: | :---: | :---: |\\n| No reg. (Linear FT) | 74.3 | 85.0 | 0.361 | 6.18 |\\n| Efficient reg. | 84.5 | 97.6 | 0.374 | 6.38 |\\n| Strict reg. | 86.4 | 99.3 | 2.027 | 8.28 |\\n\\n\\nAs you correctly pointed out, the computation of the regularization term requires additional data batches and JvP calculations. However, as mentioned previously, the batch size used for calculating the regularization term is reduced to 1/8 of the original batch size used for loss computation in image tasks. This reduction ensures that the additional data batch and JvP computations do not dominate memory consumption. Consequently, the increase in memory usage is effectively minimized.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"We believe the above discussion addresses all of your concerns. We hope this will contribute to an updated evaluation of your score.\"}",
"{\"summary\": \"The paper introduces a novel metric, $\\\\tau \\\\text{Jp}$ ($\\\\tau$-Jacobian product), to improve understanding of weight disentanglement in task arithmetic. It demonstrates that \\u03c4Jp inversely correlates with normalized accuracy, suggesting it as an indicator for weight disentanglement. A regularization technique is proposed to minimize \\u03c4Jp during fine-tuning, effectively reducing the need for coefficient adjustments in task addition and negation. It also proves valuable in incremental learning scenarios where future tasks are unknown.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.The paper addresses an important and timely topic: in an era where foundation models are prevalent, better understanding weight disentanglement is particularly valuable for enhancing the practical applicability of these models.\\n\\n2.The proposed metric offers a deeper understanding of weight disentanglement, and the regularization method effectively reduces task interference, minimizing the need for coefficient adjustments.\\n\\n3.The success of the proposed method in incremental learning scenarios aligns well with real-world applications, demonstrating its scalability and practical relevance when future tasks are unknown.\", \"weaknesses\": \"1.While the paper introduces the $\\\\tau \\\\text{Jp}$ metric and explains its relationship with weight disentanglement, the theoretical justification for why $\\\\tau \\\\text{Jp}$ regularization effectively reduces task interference could be further elaborated.\\n\\n2.The proposed regularization method lacks a comparison with other existing regularization techniques, which makes it difficult to fully assess its relative strengths and weaknesses. \\n\\n3.The paper mentions task addition, task negation, and task analogies in the introduction and background sections as key operations in task arithmetic, but there are no experiments evaluating task analogies. This inconsistency weakens the completeness of the experimental validation.\", \"questions\": \"Suggestions\\uff1a\\n\\n1.The related work section could be improved by explicitly connecting prior studies to this paper's contributions, emphasizing how the proposed method addresses existing limitations. \\n2.Consider moving the related work section after the methods section, especially since the current structure delays the introduction of the proposed method until page 5. This change would allow readers to quickly understand the proposed approach before diving into comparisons, enhancing readability and engagement.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer onTx\", \"comment\": \"### Regarding Weakness 3 and Question 3\\n>Experiments are limited to image classification tasks. Evaluation on other domains like language tasks would strengthen the claims of generality.\\n\\n>How well does the approach generalize to other domains beyond image classification?\\n\\nThank you for your valuable feedback. We have conducted additional experiments in the NLP domain, with the results summarized in Tables 3, 4, 8, and 9.\\n\\nFirst, we performed task addition experiments (Table 3 and 8) on four tasks from the GLUE benchmark using the T5 model. The selected tasks followed the setup in [4]. Incorporating regularization consistently improved both absolute and normalized accuracy. Notably, for the SST-2 task, adding tasks without regularization resulted in significant performance degradation (normalized accuracy: 61.3). This phenomenon suggests potential interference between task vectors, particularly from CoLA, another single-sentence task. Our proposed regularization substantially mitigated this issue, improving the normalized accuracy to 98.5.\\n\\nAdditionally, we conducted task negation experiments (Table 4 and 9) using GPT-2 to achieve less toxic text generation. In this setup, we performed causal language modeling on toxic texts (Civil Comments) and subtracted the resulting task vector from a pre-trained model. Our method achieved the most effective toxicity reduction while retaining the model\\u2019s original language capabilities, as measured by perplexity on WikiText-103. In contrast, other methods exhibited a trade-off between toxicity reduction and language capability retention due to interference from the task vector. Our approach significantly alleviates this trade-off (See Tables 9).\\n\\n[4] Ilharco, Gabriel, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. \\u201cEditing models with task arithmetic.\\u201d arXiv preprint arXiv:2212.04089 (2022).\"}",
"{\"title\": \"Further discussion 1\", \"comment\": \"### **The need for additional data during training**\\n\\nWe acknowledge that, unlike other task arithmetic methods, our approach requires unlabeled data from other tasks during training, which could be perceived as a limitation.\\n\\nHowever, we argue that such a setting is often practical. Specifically, we view our approach as a natural extension of the Unsupervised Domain Adaptation (UDA) framework. In UDA, the input data distribution x from other domains is leveraged to improve in-domain performance, which aligns with the assumptions of our method. Through this, we address the issue of weight disentanglement\\u2014a problem that conventional pre-merging methods fail to solve\\u2014both theoretically and empirically.\"}",
"{\"title\": \"Regarding the novelty of our work\", \"comment\": \"We appreciate the time you have taken to review our paper.\\n\\nThank you for pointing out your concerns regarding the novelty of our work.\\nTo address your feedback, we outline below the differences between our work and [1], as well as the key novelties:\\n\\n### **1. A methodologically extensible interpretation of Weight Disentanglement (Sec. 3)**\\n\\nFirst, we clarified the conditions required for weight disentanglement through a theoretically extensible discussion. As mentioned previously, [1] associates task-specific kernel localization with weight disentanglement; however, the causal considerations and practical means of achieving task-specific kernel localization remained unresolved. This problem had remained unexplored, but we theoretically demonstrated that the causal factor is $\\\\tau \\\\text{Jp}$, and we also provided experimental evidence to substantiate this claim.\\n\\n### **2. Proposal of an Efficient Method Achieving Significant Performance Improvements (Sec. 4.1 and 4.2)**\\n\\nBased on the discussion in Novelty 1, we proposed a novel regularization method as a practical approach. This method is efficient in terms of both computational and memory costs, yet achieves up to a 10.2-point improvement in accuracy over [1]\\u2019s Linear FT in task addition. \\n\\nThe efficiency of our method is demonstrated in the table below, which shows the costs of fine-tuning, task vector coefficient tuning during merging, and their total. The additional cost of our method compared to Linear FT during fine-tuning is kept minimal. Furthermore, our method achieves significant performance improvements even without task vector coefficient tuning. As a result, the total cost of our method is lower than that of Linear FT while achieving much higher performance.\\n\\nWe recognize that this improvement is not merely incremental but represents a significant and meaningful contribution.\\n\\n| Method | Fine-tuning | | Merging | | Total | Accuracy |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| | Time (min) | Memory / device (GB) (4 GPU used) | Time (min) | Memory (GB) (1 GPU used) | Time (min) | |\\n| Non-lin. FT | 57 | 4.5 | 18 | 1.5 | 75 | 70.4 |\\n| Linear FT | 96 | 6.2 | 37 | 4.8 | 133 | 74.3 |\\n| Ours(without coef. tuning) | 100 | 6.4 | 0 | 0 | 100 | 84.2 |\\n| Ours(with coef. tuning) | 100 | 6.4 | 37 | 4.8 | 137 | **84.5** |\\n\\n### **3. Enhanced Practical Feasibility (Sec. 4.3)**\\n\\nIn Section 4.3, we demonstrated the performance of our method under more constrained scenarios, where significant performance gains over the method in [1] were still observed. Moreover, in the context of Domain Adaptation, methodologies that leverage unlabeled data from the target domain to improve target performance, known as Unsupervised Domain Adaptation (UDA) [2], have been extensively studied. Unlike [1] and other conventional approaches, our work extends UDA methodologies by utilizing unlabeled data from other tasks for the purpose of weight disentanglement. We consider this extension to be another novelty of our study.\\n\\n**In summary**, the above novelties are not minor incremental improvements over [1], but rather substantial mitigations of bottlenecks in the practical application of task arithmetic. We believe our work should be evaluated accordingly.\\n\\nWe would greatly appreciate it if you could reconsider your score in light of these points. 
If you have any additional concerns, please do not hesitate to let us know.\\n\\n[2] Ganin, Yaroslav, and Victor Lempitsky. \\u201cUnsupervised domain adaptation by backpropagation.\\u201d International Conference on Machine Learning. PMLR, 2015.\"}",
"{\"title\": \"Reply to Authors\", \"comment\": \"Hi, could you provide some results regarding computational cost?\"}",
"{\"title\": \"Response to reviewer hopm\", \"comment\": \"### Regarding Weakness 1\\n> Missing baselines on more recent task arithmetic work: The main tables should include some recent task arithmetic results (e.g. TIES-Merging and AdaMerging) as well as standard single task and MTL baselines (although it is in the appendix), if only to better understand the existing gap in performance.\\n\\nThank you for your valuable suggestions to include additional methods and revise the tables. \\n\\nIn response, we conducted experiments with other existing task arithmetic methods and compared them. Specifically, we added TIES-Merging and AdaMerging to Table 1 for direct comparison. Regarding AdaMerging, its training process for task vector coefficients required significant GPU memory, making it infeasible to implement for all model sizes within our constrained environment. Therefore, we reported its results based on those provided by the original authors under the same experimental settings. Additionally, we included TIES-Merging in other experiments (Tables 2\\u20134), including NLP tasks as well as image tasks, as shown in Tables 3 and 4. Our method outperformed both TIES-Merging and AdaMerging in all cases.\\n\\nAdditionally, we incorporated the performance of MTL and standard single-task models (\\u201cIndividual\\u201d in the tables) into the results.\"}",
"{\"title\": \"Response to reviewer hopm\", \"comment\": \"### Regarding the Question\\n>I find the notion of \\\"One-shot\\\" and \\\"fine-tuned\\\" experimental setting could be improved; First because the notion of fine-tuning can become confusing between the coefficients $\\\\alpha$ vs the model parameters $\\\\theta$ fine-tuning. Second, because it is not clear if it is referring to a specific method/objective for fine-tuning the task coefficients (e.g. AdaMerging or others) or simply hyperparameter search.\\n\\nThank you for your helpful suggestions. We agree with your suggestion and have updated the notation accordingly. Specifically, in Tables 1 and 2, we have changed the column title to \\u201cTask Vector Coef.\\u201d and revised the notation to indicate \\u201c1.0\\u201d for cases without coefficient tuning and \\u201cGrid-searched\\u201d for cases with tuning. We believe this clarification reduces potential confusion regarding the representation of coefficient adjustment and clearly demonstrates that we performed a simple hyperparameter search.\"}",
"{\"title\": \"Response to reviewer rGLy\", \"comment\": \"### Regarding Weakness 3\\n\\n>The paper mentions task addition, task negation, and task analogies in the introduction and background sections as key operations in task arithmetic, but there are no experiments evaluating task analogies. This inconsistency weakens the completeness of the experimental validation.\\n\\nThank you for your thoughtful insights regarding task analogies.\\n\\nThis study is based on the task arithmetic defined in Equation 1 of [3]. As this definition does not account for task analogies, we limited task arithmetic in this work to addition and negation only. To avoid confusion, we have removed references to task analogies from the explanation of task arithmetic and instead added the above explanation as a footnote on page 2 to clarify this point.\\n\\n[3] Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 2023.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"We sincerely appreciate your score update and your openness to further discussions.\\n\\nWe are confident that the limitations you highlighted regarding our method are either acceptable or do not constitute significant limitations. We address these points in the following comments and kindly encourage you to consider further updating your score. Additionally, we welcome further discussions on these topics involving other reviewers and the AC.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"### **Regarding Fair Comparison**\\n>As stated before, my major concern is the unfair comparison with baselines. Existing model merging techniques can be broadly categorized into two main types (Yang et al., 2024): (i) Pre-Merging Methods: These methods focus on enhancing the conditions necessary for effective model merging by optimizing the fine-tuning process of individual models. (ii) During Merging Methods: These approaches address task conflicts and interference through various strategies before executing the parameter merging operations.\\n\\n>The proposed method focuses on both fine-tuning and task conflicts, as a result, all training data across different tasks and additional computational resources are needed. While I do see clear positives in the paper, especially when compared to traditional task merging, I am still on the fence about the novelty/strength of the contribution and where exactly to place it in the literature.\\n\\nAs you correctly noted, our method primarily focuses on \\u201cfine-tuning\\u201d and suppressing \\u201ctask conflicts.\\u201d However, this approach is based on linearized fine-tuning [1], which aligns with the context of **(i) Pre-Merging Methods**, rather than **(ii) During Merging Methods**. Specifically, while [1] belongs to (i), it emphasizes improving weight disentanglement\\u2014hence reducing task conflicts\\u2014through linearized fine-tuning, which is consistent with our objectives.\\n\\nRegarding the fairness of comparisons, we first emphasize that all existing model merging methods, including basic task arithmetic, require labeled data from all tasks during the adjustment of task vector coefficients. In contrast, our method demonstrates superior performance even when all coefficients are fixed at 1.0, without requiring coefficient adjustment. This ensures **fairness during evaluation**.\\n\\nHowever, as you pointed out, there is an inherent lack of fairness in terms of data accessibility during training, as different methods have different levels of access to data (see Table 7 for details). To address this, as suggested, we evaluated the performance of multi-task learning (MTL). This evaluation illustrates the impact of relaxed data accessibility constraints on task arithmetic performance and highlights the potential performance improvements achievable through our method\\u2019s use of unlabeled data.\\n\\nIn summary, the contribution of our method lies in its novel approach within the context of **(i) Pre-Merging Methods**, effectively leveraging unlabeled data during training to significantly suppress task conflicts. This approach mitigates the costs and dependency on labeled data from all tasks required by conventional methods for coefficient adjustment.\\n\\n### **Regarding Bottleneck of Our Method**\\n> Furthermore, it has a bottleneck while the model's size is increasing (e.g. ViT-L-14), I suppose it's because of the performance drop on single-task due to the reg.\\n\\nAdditionally, regarding your concerns about the bottlenecks associated with larger model sizes, it is true that as model size increases, the performance of other methods becomes comparable to ours (particularly with AdaMerging). 
First, addressing the concern that this is due to a decline in single-task performance caused by our regularization: as shown in Table 4 of Appendix E.1, the addition of our regularization results in little to no performance degradation for individual tasks compared to simple Linear FT (and in some cases, even improves performance).\\n\\nWe believe the issue stems from the fact that as model size increases, Non-linear FT naturally becomes linearized, as demonstrated in Fig. 9 of [1], achieving sufficient linearization without requiring explicit linearized fine-tuning, which can inadvertently have a negative impact. For such large models, further performance improvements could potentially be achieved by using our regularization without explicit linearization.\\n\\n[1] Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 2023.\"}",
"{\"title\": \"Further question\", \"comment\": \"Thanks for the reply, could you please provide the computational cost of both efficient and strict reg so that I could have a clear view? **I double-checked the code online and have the following question: in the penalty iter, the newly introduced data batch and additional jvp product both increase the memory consumption, I suspect nearly double memory compared to Linear FT. Besides, the authors deployed four V100 GPUs which have 64GB VRAM in total, why do the authors leave most memory idle?** Please correct me if I'm wrong.\\n\\nAs stated before, my major concern is the unfair comparison with baselines. Existing model merging techniques can be broadly categorized into two main types (Yang et al., 2024): (i) Pre-Merging Methods: These methods focus on enhancing the conditions necessary for effective model merging by optimizing the fine-tuning process of individual models. (ii) During Merging Methods: These approaches address task conflicts and interference through various strategies before executing the parameter merging operations.\\n\\nThe proposed method focuses on both fine-tuning and task conflicts, as a result, all training data across different tasks and additional computational resources are needed. While I do see clear positives in the paper, especially when compared to traditional task merging, I am still on the fence about the novelty/strength of the contribution and where exactly to place it in the literature. **Furthermore, it has a bottleneck while the model's size is increasing (e.g. ViT-L-14), I suppose it's because of the performance drop on single-task due to the reg.**\\n\\nMeanwhile, I'd like to consider this method as an MTL method, thus some results regarding generalization would strengthen this work, e.g. generalizing to an entirely unseen test set (table 3 from Adamerging). I encourage the authors to include this experiment if time permits.\\n\\nCurrently, I am inclined to keep my rating. **I'm open to reconsidering my score if all the above concerns are addressed.**\\n\\nYang et al. Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities.\\n\\nYang et al. AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024.\"}",
"{\"title\": \"Response to reviewer rGLy\", \"comment\": \"### Regarding Weakness 2\\n\\n>The proposed regularization method lacks a comparison with other existing regularization techniques, which makes it difficult to fully assess its relative strengths and weaknesses.\\n\\nThank you for your suggestions to include additional methods and revise the tables. \\n\\nIn response, we conducted experiments with other existing task arithmetic methods and compared them. Specifically, we added TIES-Merging and AdaMerging to Table 1 for direct comparison. Regarding AdaMerging, its training process for task vector coefficients required significant GPU memory, making it infeasible to implement for all model sizes within our constrained environment. Therefore, we reported its results based on those provided by the original authors [2] under the same experimental settings. Additionally, we included TIES-Merging in other experiments (Tables 2\\u20134), including NLP tasks as well as image tasks, as shown in Tables 3 and 4. Our method outperformed both TIES-Merging and AdaMerging in all cases.\\n\\n[2] Yang, Enneng, et al. \\\"Adamerging: Adaptive model merging for multi-task learning.\\\" arXiv preprint arXiv:2310.02575 (2023).\"}",
"{\"summary\": \"The paper tackles the task of task arithmetics, i.e. how to combine *task vectors/parameters* to form a multi-task model. A key issue is to determine the best combination weights as to minimise interference between tasks, and maximise sharing of information / positive transfer.\\nMore specifically, the authors make use of two previously introduced notions: **(i)** the notion of **weight disentanglement** which was proposed as a measure of task interference in task arithmetic. And **(ii)** the Neural Tangent Kernel (NTK) which designates a training regime where parameter updates can be expressed with a linearised approximation.\\nPrevious works have suggested that performing task arithmetics under the NTK regime can lead to better MTL performance. the authors investigate this behaviour in more depth. Based on this analysis, they also propose a regularisation technique to further reduce task interference when performing task arithmetic, which involves slightly fine-tuning the task vectors themselves.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well motivated and grounded in previous related work\", \"The proposed method is simple and could adapt to different task arithmetic variants\", \"Interesting insights on the link between the proposed regularisation and weight disentanglement\", \"A more efficient implementation of the method is proposed for handling larger number of tasks (Equation 11)\"], \"weaknesses\": [\"Missing baselines on more recent task arithmetic work: The main tables should include some recent task arithmetic results (e.g. TIES-Merging and AdaMerging) as well as standard single task and MTL baselines (although it is in the appendix), if only to better understand the existing gap in performance.\", \"Missing discussion about the extra cost: The paper briefly mentions efficiency of the method (e.g. equation 11 or line 364), however I think this could be discussed in more depth: On the one hand, the proposed method seems more robust to task coefficients $\\\\alpha$, which could save on hyper parameter tuning; On the other hand, it involves a fine-tuning procedure which requires knowledge/access to all tasks simultaneously (Equation 10) as opposed to directly combining task vectors obtained independently from one another.\"], \"questions\": [\"I find the notion of \\\"One-shot\\\" and \\\"fine-tuned\\\" experimental setting could be improved; First because the notion of fine-tuning can become confusing between the coefficients $\\\\alpha$ vs the model parameters $\\\\theta$ fine-tuning. Second, because it is not clear if it is referring to a specific method/objective for fine-tuning the task coefficients (e.g. AdaMerging or others) or simply hyperparameter search.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for the detailed responses and revisions\\u2014they address my concerns well, so I\\u2019ve updated my score!\"}",
"{\"metareview\": \"This paper is concerned with task arithmetic, and to this end the authors propose the $\\\\tau$-Jacobian product ($\\\\tau$Jp) metric which relates to weight disentanglement. They show that by adding a regulariser to minimise this quantity they can improve the performance of task arithmethic in practice.\\n\\nIn terms of strengths, reviewers thought this work was well-motivated, and the $\\\\tau$Jp metric was seen as \\\"novel\\\", \\\"well-motivated\\\", \\\"reasonable\\\", and \\\"interesting\\\". The proposed regularisation method was well-received for its practical benefits (it eliminates the need for tuning hyperparameters at inference time), and its simplicity. The experiments and results were lauded.\\n\\nFor weaknesses, there were concerns about a lack of novelty in comparison to [1], a lack of theory behind the success of the proposed regulariser, computational cost, and a lack of experiments in terms of baselines and non-image domains.\\n\\nThe authors did well in responding to reviews, and provided additional experiments. As 3/4 of the reviews were borderline (8,6,5,5) post-rebuttal I encouraged the reviewers to have a discussion to move towards a more decisive opinion. While none of the borderlines explicitly declared either way, the resulting discussion was useful for my decision-making. Two reviewers echoed their appreciation of this paper in terms of the experiments and practical benefits, with another reiterating a lack of theoretical novelty.\\n\\nI think the strengths of this paper outweigh the weaknesses, and it provides significant added value from [1]. My recommendation is for acceptance (poster).\\n\\n[1] Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 2023.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer hopm believed that certain experiments and discussions were missing from the paper. These were provided by the authors and they raised their score from 6->8. Reviewer onTx had a good back-and-forth with the authors, requesting additional experiments and results of computational cost. The authors did well at responding to these, and the reviewer raised their score from 3->5 which to me was a significant achievement given their initial stance. Reviewer rGLy had concerns (including a comparison of regularisers) which were addressed by the authors, raising their score from 5->6. After an exchange, Reviewer n6ZH remained unconvinced that this work was a significant advancement of [1] so kept their score as a 5. I note that the last two reviewers mentioned had low confidence scores (2). I think the authors did a great job at convincing three reviewers to up their scores. Most concerns were addressed apart from issues of novelty relating to [1] but as the experiments and method presented were so well-appreciated I believe this is a significant enough improvement to warrant acceptance.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"### Regarding Computational Cost\\n\\n>Hi, could you provide some results regarding computational cost?\\n\\nThank you for your prompt response. Regarding the computational cost of our method, we would like to address your concern.\\n\\nAs mentioned in our previous reply under \\u201cRegarding Weakness 2 and Question 1,\\u201d we provided a detailed explanation including the runtime per iteration (Sec. / Iter.) and included results in Table 6 of Appendix C. For clarity, we present the table below:\\n\\n| Method | Abs. (\\u2191) | Norm. (\\u2191) | Sec. / Iter. (\\u2193) |\\n| ---- | :---: | :---: | :---: |\\n| No reg. (Linear FT) | 74.3 |85.0 |0.361 |\\n| Ours (Eq. 11) | 84.5 |97.6 |0.374 |\\n\\nThese results are based on the task addition experiment using ViT-B-32 shown in Table 1. The runtime increase introduced by our regularization term is approximately 0.01 seconds per iteration, which is minimal.\\n\\nWe hope this addresses your concern and clarifies our approach. Thank you again for your valuable feedback and for helping us improve the quality of our paper. If you have any additional questions or concerns, please feel free to let us know.\"}",
"{\"title\": \"Updated\", \"comment\": [\"Hi, thanks for your detailed responses, I'd like to update my score to a marginal score due to some limitations of the proposed method. I'm open to discussing this further with other reviewers and AC.\", \"The proposed method is more like a pre-merging method, however, it requires additional training data compared to baselines.\", \"While the superior performance has been achieved, we cannot neglect the additional computational cost and its bottleneck regarding model size. A more generalized and efficient method would be a benefit for the society.\"]}",
"{\"title\": \"Further question\", \"comment\": [\"Thanks for the detailed reply, here are some further questions:\", \"While additional data are unlabeled, seeing test samples seems unfair compared to other baselines except for Adamerging; Could you provide some comparisons to other baselines, e.g. traditional MTL baselines which have access to all test samples?\", \"Could you provide results regarding memory consumption in addition to iteration time?\"]}",
"{\"title\": \"Discussion\", \"comment\": \"Dear reviewers,\\n\\nThe authors have responded to your reviews. \\n\\nUntil November 26th @ 2359 (AOE time) reviewers and authors can freely exchange responses, so if there any clarifications you require from the authors, now is the time to seek them!\\n\\nBest,\\n\\nAC\"}",
"{\"title\": \"Response to reviewer onTx\", \"comment\": \"### Regarding Weakness 1 and Question 2\\n>The method requires access to data from all other tasks during training, which is often unavailable in realistic task arithmetic scenarios. This limits the practical applicability of the approach.\\n\\n>Can the method be adapted to work with limited or no access to data from other tasks?\\n\\nThank you for raising this important concern and for your thoughtful question.\\n\\nOur method does indeed require access to datasets from other tasks; however, we clarify that, in the case of classification tasks, only access to unlabeled data is necessary (see Table 7). This naturally extends the context of Unsupervised Domain Adaptation (UDA) [1], where access is limited to the data distribution x of other domains. By utilizing this access, our method learns task vectors in orthogonal directions to $\\\\nabla_{\\\\theta}f(x, \\\\theta_0)$ of other tasks, enhancing model weight disentanglement. Unlike other methods, which cannot effectively utilize the data distribution x in these scenarios, our approach provides an effective learning strategy under these conditions.\\n\\nFurthermore, in Section 4.3, we demonstrate that our method remains effective even in scenarios where access is limited to specific tasks or where the amount of accessible data from other tasks is constrained. Even minimal regularization-based learning with a few steps significantly improves task arithmetic performance, highlighting the robustness of our approach under these conditions.\\n\\n[1] Ganin, Yaroslav, and Victor Lempitsky. \\\"Unsupervised domain adaptation by backpropagation.\\\" International conference on machine learning. PMLR, 2015.\"}",
"{\"summary\": \"This paper proposes a new metric called $\\\\tau$Jp ($\\\\tau$-Jacobian product) for measuring weight disentanglement in task arithmetic operations on neural networks. The authors theoretically analyze the relationship between $\\\\tau$Jp and interference between tasks, and introduce a regularization method based on minimizing $\\\\tau$Jp during fine-tuning. Experiments on image classification tasks demonstrate improved performance and reduced need for hyperparameter tuning compared to existing methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper provides a comprehensive theoretical and empirical study of the relationship between the proposed $\\\\tau$Jp metric and task interference in neural networks.\\n\\n2. The introduction of $\\\\tau$Jp as a new metric for weight disentanglement is novel and well-motivated.\\n\\n3. The proposed regularization method eliminates the need for tuning inference-time hyperparameters ($\\\\alpha$), which is a practical advantage.\", \"weaknesses\": \"1. The method requires access to data from all other tasks during training, which is often unavailable in realistic task arithmetic scenarios. This limits the practical applicability of the approach.\\n\\n2. The computational cost of calculating \\u03c4Jp is likely very high, as it involves multiple Jacobian-vector products. The paper does not report runtime or resource requirements, making it difficult to assess scalability.\\n\\n3. Experiments are limited to image classification tasks. Evaluation on other domains like language tasks would strengthen the claims of generality.\\n\\n4. The derivation of Equation 7 from the weight disentanglement definition is non-trivial and should be explained more clearly.\", \"questions\": [\"How does the computational cost of the proposed method compare to existing approaches?\", \"Can the method be adapted to work with limited or no access to data from other tasks?\", \"How well does the approach generalize to other domains beyond image classification?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer n6ZH\", \"comment\": \"### Regarding Weakness 1\\n\\n>The novelty of this paper may be limited. My consideration is that this paper seems to fundamentally align with the approach proposed by Ortiz-Jimenez et al. (2023) [1], which also emphasizes fine-tuning models in the tangent space. Although using the specific regularization term, this paper does not sufficiently differentiate itself from this existing work.\\n\\nThank you for your valuable insights regarding the novelty of our contributions.\\n\\nIn [1], it was experimentally shown that model linearization improves weight disentanglement. Furthermore, in their discussion on kernel methods, they theoretically demonstrated that the localization of kernel eigenfunctions to specific tasks leads to weight disentanglement. However, their work does not provide a methodological explanation for achieving task localization of the kernel, leaving their findings as a result-oriented discussion without a clear procedural basis.\\n\\nIn contrast, our work not only incorporates model linearization but also theoretically establishes that reducing the $\\\\tau_{\\\\text{Jp}}$ term corresponding to other tasks in the NTK approximation is necessary for achieving weight disentanglement (see Section 3). We further demonstrate experimentally that the magnitude of $\\\\tau_{\\\\text{Jp}}$ correlates with weight disentanglement error and normalized accuracy (see Figures 1 and 2). This discussion provides a necessary condition for weight disentanglement and shows, both theoretically and experimentally, that explicitly introducing a regularization term to constrain $\\\\tau_{\\\\text{Jp}}$ during fine-tuning improves weight disentanglement.\\n\\nOur contribution lies not only in achieving performance improvements through regularization but also in unveiling the causal mechanism underlying task arithmetic and weight disentanglement. This work extends the methodological discussion in a way that enhances the understanding of these mechanisms and their implications.\"}",
"{\"title\": \"Response to reviewer hopm\", \"comment\": \"### Regarding Weakness 2\\n\\n>Missing discussion about the extra cost: The paper briefly mentions efficiency of the method (e.g. equation 11 or line 364), however I think this could be discussed in more depth: On the one hand, the proposed method seems more robust to task coefficients \\n, which could save on hyper parameter tuning; On the other hand, it involves a fine-tuning procedure which requires knowledge/access to all tasks simultaneously (Equation 10) as opposed to directly combining task vectors obtained independently from one another.\\n\\nWe greatly appreciate your insightful and constructive feedback. Below, we address your points in detail.\\n\\n**Efficiency.** \\nFirst, regarding the efficiency of our approach, the loss function shown in Equation 11 reduces the computational cost compared to Equation 10. Specifically, in Equation 10, the regularization term grows with the number of tasks considered at each iteration, whereas in Equation 11, the regularization is computed for only a single task at each iteration. This ensures that the computational cost per iteration remains constant, regardless of the number of tasks, making it a scalable method. As shown in Appendix C, we compared the runtime per iteration and found that Equation 11 achieves approximately an 80% reduction in runtime compared to Equation 10, while maintaining comparable performance. Furthermore, the runtime of Equation 11 is nearly identical to that of Linear FT (without regularization), while achieving significant performance improvements. These results demonstrate that Equation 11 is an efficient and highly effective approach.\\n\\nRegarding the tuning of task vector coefficients, as you correctly pointed out, our method achieves strong performance, particularly in task addition, without requiring coefficient tuning. This eliminates the need for manual adjustment. Traditional methods that rely on coefficient tuning suffer from increased computational costs as the number of tasks, dataset size, or parameter count grows, limiting their scalability. In contrast, our method excels in scalability by addressing this issue.\\n\\n**Access to Other Tasks During Fine-Tuning.** Next, regarding the requirement for access to other tasks during fine-tuning, we clarify that while our method does require access to datasets from other tasks, in the case of classification tasks, it only needs access to unlabeled data (See Table 7). This aligns naturally with the context of Unsupervised Domain Adaptation (UDA) [1], where access to the data distribution x of other domains is leveraged. By utilizing this access, our method learns task vectors in orthogonal directions to $\\\\nabla_{\\\\theta}f(x, \\\\theta_0)$ of other tasks, enhancing model weight disentanglement. Unlike other methods that cannot effectively utilize data distributions x under such scenarios, we propose a viable approach for learning in these conditions.\\n\\nFurthermore, in Section 4.3, we demonstrate that our method remains effective even in scenarios where access is limited to specific tasks or where the amount of accessible data from other tasks is constrained. Even minimal regularization-based learning with a few steps significantly improves task arithmetic performance, highlighting the robustness of our approach under these conditions.\\n\\n[1] Ganin, Y., & Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. International Conference on Machine Learning (ICML). PMLR.\"}",
"{\"title\": \"Reply to reviewer hopm\", \"comment\": \"Thank you for taking the time to review our work and providing constructive feedback. If you have any further questions or comments, please do not hesitate to let us know.\"}",
"{\"title\": \"Response to reviewer n6ZH\", \"comment\": \"We sincerely appreciate the time you have dedicated to reviewing our work. We are fully committed to revising our paper in response to your feedback, so please feel free to share any additional concerns or suggestions.\"}",
"{\"title\": \"Further discussion 3\", \"comment\": \"### **Model size bottleneck**\\n| Method | Task vector coef. | ViT-B-32 Abs. (\\u2191) | ViT-B-32 Norm. (\\u2191) | ViT-B-16 Abs. (\\u2193) | ViT-B-16 Norm. (\\u2191) | ViT-L-14 Abs. (\\u2193) | ViT-L-14 Norm. (\\u2191) |\\n| --- | --- | :---: | :---: | :---: | :---: | :---: | :---: |\\n| Non-lin. FT | 1.0 | 19.9 | 20.5 | 19.1 | 19.7 | 37.6 | 39.0 |\\n| Non-lin. FT | Grid-searched | 70.4 | 78.0 | 75.5 | 81.5 | 84.0 | 89.3 |\\n| Linear FT | 1.0 | 55.4 | 61.7 | 58.2 | 63.6 | 80.5 | 86.7 |\\n| Linear FT | Grid-searched | 74.3 | 85.0 | 78.7 | 87.6 | 85.8 | 92.8 |\\n| Ties-Merging ) | 1.0 | 74.2 | 84.8 | 78.6 | 87.6 | 85.0 | 91.9 |\\n| Ties-Merging | Grid-searched | 74.2 | 84.8 | 78.6 | 87.6 | 85.0 | 91.9 |\\n| AdaMerging | Trained | 80.1 | 88.5 | 84.9 | 92.1 | **90.8** | 96.4 |\\n| **Ours** | 1.0 | 84.2 | 97.2 | 87.5 | 98.4 | **90.8** | **99.0** |\\n| **Ours** | Grid-searched | **84.5** | **97.6** | **87.6** | **98.5** | **90.8** | **99.0** |\\n\\nIf by \\u201cbottleneck regarding model size,\\u201d you are referring to the diminishing improvements of our method over existing approaches as model size increases, we would like to highlight the following points:\\n\\nFirst, as previously noted, this phenomenon is not caused by any degradation in single-task performance due to the introduction of our regularization. This has been clearly demonstrated in Appendix E.1 (Table 4), where the addition of our regularization shows minimal or no degradation in single-task performance.\", \"the_possible_reasons_for_this_phenomenon_include_the_following\": \"1.\\t**Performance Saturation Observed Across Methods**\\n\\nThe diminishing improvement margins from existing methods (e.g., Grid-searched Non-lin. FT) as model size increases are not unique to our approach. Similar behavior is observed in other existing methods, such as AdaMerging and TIES-Merging, and this is a well-known phenomenon in the ML community. For instance, when applying a novel method to train MNIST, significant improvements can be observed with a small model like LeNet-5, while the improvement margins are much smaller with a large model like ViT-Large. This is due to performance saturation, a common occurrence as models approach their capacity limits, and it is not specific to our method.\\n\\n2.\\t**Natural Linearization of Non-lin. FT in Large Models**\\n\\nAs model size increases, Non-lin. FT naturally becomes linearized, achieving sufficient linearization without requiring explicit linearized fine-tuning. In such cases, explicit linearization, which can sometimes involve performance degradation, becomes unnecessary and may even have adverse effects. For extremely large models, we anticipate further performance improvements by applying our regularization without relying on explicit linearized fine-tuning.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"We appreciate your additional questions, which contribute to improving the quality of our paper.\\n\\n### **Regarding Fair Comparison**\\n\\n> While additional data are unlabeled, seeing test samples seems unfair compared to other baselines except for Adamerging; Could you provide some comparisons to other baselines, e.g. traditional MTL baselines which have access to all test samples?\\n\\nFirst, we clarify that when using unlabeled data from other tasks during fine-tuning, we rely on training data, not test data. Therefore, the data used during fine-tuning does not directly overlap with the data used for evaluation.\\n\\nNonetheless, our method differs from other fine-tuning approaches, such as Non-linear FT and Linear FT, in the range of accessible data during fine-tuning. To address this, we have already included the results of traditional MTL (\\u201dMTL\\u201d) in Table 1. While MTL achieves very high absolute accuracy by leveraging labeled data from all tasks, our method demonstrates comparable performance despite being restricted to accessing only unlabeled data.\\n\\nRelated to this discussion, we provide additional insights on data accessibility during fine-tuning in Appendix D (see especially Table 7). Briefly, while MTL achieves high multi-task performance, it requires labeled data for all tasks during fine-tuning and lacks the flexibility to add new capabilities or modify existing ones without forgetting prior knowledge. In contrast, our method retains the inherent flexibility of task arithmetic while utilizing unlabeled data from other tasks during fine-tuning, achieving performance comparable to MTL.\\n\\n### **Regarding Memory Consumption**\\n\\n> Could you provide results regarding memory consumption in addition to iteration time?\\n\\nBelow, we present the results of the GPU memory consumption increase caused by the addition of our regularization term in the experiments from Table 1, broken down by model size. The peak size of allocated memory is reported in gigabytes (GB). As stated in the paper, gradient accumulation is used for ViT-L-14. As previously demonstrated in our response, the efficient implementation of the regularization ensures that the increase in memory consumption is kept minimal.\\n\\n| Method | ViT-B-32 | ViT-B-16 | ViT-L-14 |\\n| :--------------------- | :--------: | :--------: | :--------: |\\n| No reg. (Linear FT) | 6.18 | 13.38 | 15.00 |\\n| Ours | 6.38 | 13.69 | 15.71 |\\n| **Increase** | 0.20 (+3.2%) | 0.31 (+2.3%) | 0.71 (+4.7%) |\\n\\nWe have addressed all your questions and also improved the presentation for those responses that may have been overlooked. We would greatly appreciate it if these responses could be taken into account when updating your score. Of course, if you have any further questions, please do not hesitate to ask.\"}",
"{\"title\": \"Reply to reviewer onTx\", \"comment\": \"Thank you for your feedback.\\n\\n### **Computational Cost of Our Method is Comparable to Other Methods**\\n\\nWe directly implemented the AdaMerging code provided by the authors (they use NVIDIA RTX 3090 with 24GB of device memory) on their GitHub repository in our environment. However, on our single device (NVIDIA V100 with 16GB of device memory), the implementation resulted in an Out of Memory (OOM) error.\\n\\nFrom a runtime perspective, as reported by the authors in their paper, achieving sufficient performance with AdaMerging requires coefficient tuning, which adds an additional 125 minutes (even on higher-quality GPUs than ours) compared to conventional task arithmetic (see Tab. 12 of [2]). This makes the total runtime for AdaMerging approximately twice as long as our method.\\n\\nBased on the above observations, we are confident that the computational cost of our method is **comparable to, or even more efficient than, other methods**. Therefore, we do not consider it a weakness of our approach.\\n\\n### **Model size bottleneck**\\n\\n>Furthermore, it has a bottleneck while the model's size is increasing (e.g. ViT-L-14), I suppose it's because of the performance drop on single-task due to the reg.\\n\\nFirst, as previously mentioned, we have already refuted your suggestion above. Adding our regularization does not significantly change single-task accuracy; in some cases, it even improves it. \\n\\nSecond, **we respectfully disagree with the assessment** that our method has a bottleneck due to limited performance improvements in scenarios where large models already saturate performance. In such environments, there is inherently little room for improvement, and attributing this to a weakness of our method is not fair.\\nFor example, many widely accepted and commonly used techniques, such as regularization and data augmentation, which are considered broadly beneficial to communities, do not improve performance (or have no room for improvement) in such environments.\\n\\nOur approach remains **highly effective for applications with limited model sizes or more challenging tasks where performance is not saturated**. In these situations, our method achieves benchmarks that other methods do not reach. Even when the problem becomes easier due to high model expressiveness, incremental improvements are still valuable for many applications. While we agree that adding methods with additional computational costs may not be necessary in saturated settings, our method offers superior performance over existing techniques with less computational and memory overhead than alternatives like AdaMerging.\\n\\nIn light of these considerations, we kindly request that you reconsider your confidence and score.\\n\\n[2] Yang et al. AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024.\"}",
"{\"title\": \"Further discussion 2\", \"comment\": \"### **Computational and memory cost**\\n\\nAs we have demonstrated, the increase in runtime and memory consumption caused by the addition of our regularization term during training is kept practical. Furthermore, during testing, our method achieves the highest performance without requiring coefficient tuning. In contrast, other methods rely on coefficient adjustment and access to labeled data from all tasks, leading to additional costs. We would like to emphasize that our method eliminates these costs and accessibility requirements at merging time.\\n\\nTo provide a more detailed analysis, we present a comparison of runtime and memory consumption during Pre-merging and Merging stages in the table below. The runtime during Pre-merging was calculated based on the per-iteration runtime, while the Merging runtime for AdaMerging was taken from the authors\\u2019 reported results (note that a GeForce RTX 3090* was used in their experiments). While Linear FT and our method require linearized fine-tuning during Pre-merging, which takes approximately twice the time of standard fine-tuning, **the computational overhead introduced by our regularization (Ours - Linear FT) remains minimal at around 4%.** On the other hand, during the merging stage, other methods, particularly AdaMerging, incur significantly higher computational and memory costs. **In contrast, our method, especially without coefficient tuning, requires no merging costs, resulting in lower total costs while outperforming all other methods.**\\n\\n*GeForce RTX 3090, which has been reported to outperform the Tesla V100 we used in terms of FP32 FLOPS and other benchmarks, provides a significant advantage for AdaMerging in the comparison of the merging time. Despite this, our method is still faster.\\n\\nAdditionally, as highlighted in Appendix D, task arithmetic is increasingly desired in applications such as multi-agent systems and personalized recommendation systems, where multiple task vectors can represent diverse models. For methods requiring coefficient tuning, the merging cost becomes increasingly dominant over pre-merging costs as the number of models to be represented grows. **In contrast, our method, which does not require coefficient tuning, maintains a constant total cost corresponding to the pre-merging stage, regardless of the number of models represented.** This indicates that our method offers highly efficient model editing in such scenarios.\\n\\n| Method | Pre-merging (Time min) | Pre-merging (Memory GB / device) | Merging (Time min) | Merging (Memory GB) | Total Time (min) | Accuracy (%) |\\n|-------------------------|:------------------------:|:-----------------------------------:|:--------------------:|:---------------------:|:------------------:|:-------------:|\\n| Non-lin. FT | 57 | 4.5 | 18 | 1.5 | 75 | 70.4 |\\n| Linear FT | 96 | 6.2 | 37 | 4.8 | 133 | 74.3 |\\n| AdaMerging | 57 | 4.5 | 143 | >16 (OOM)* | 200 | 80.1 |\\n| Ours (w/o coef. tuning) | 100 | 6.4 | 0 | 0 | 100 | 84.2 |\\n| Ours (w/ coef. tuning) | 100 | 6.4 | 37 | 4.8 | 137 | **84.5** |\\n\\n*OOM = Out of Memory on our device.\\n\\n\\nBased on the arguments above, we are open to further discussions, involving other reviewers and the AC, regarding whether the computational and memory costs associated with our regularization term are significant or negligible. We welcome these discussions and are eager to provide additional insights.\"}",
"{\"title\": \"Response to reviewer n6ZH\", \"comment\": \"### Regarding the Question\\n\\n>Could the proposed regularization affect the model's plasticity? Specifically, how might the addition of this regularization impact the fine-tuning performance, potentially influenced by the strength of the regularization?\\n\\nWe appreciate your question regarding the impact of our regularization on fine-tuning performance.\\n\\nThe impact of regularization on fine-tuning performance is shown in Figure 4 in the Appendix E.1. Compared to Linear FT (before regularization), our method (Ours) maintains comparable performance. The performance gap between Linear FT (including Ours) and Non-linear FT arises from the *non-linear advantage* [3], where fine-tuning in the non-linear regime achieves higher performance due to the richer expressivity of the non-linear loss landscape.\\n\\n[3] Fort, Stanislav, et al. \\u201cDeep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel.\\u201d *Advances in Neural Information Processing Systems* 33 (2020): 5850-5861.\"}",
"{\"title\": \"Response to reviewer rGLy\", \"comment\": \"### Regarding Weakness 1\\n>While the paper introduces the $\\\\tau Jp$ metric and explains its relationship with weight disentanglement, the theoretical justification for why $\\\\tau Jp$ regularization effectively reduces task interference could be further elaborated.\\n\\nThank you for your insightful feedback.\\n\\nOur regularization term is defined as $||(\\\\theta - \\\\theta_0)^T \\\\nabla_\\\\theta f(\\\\theta_0, x_{\\\\text{other}})||^2 $, designed to encourage the task vector ($\\\\theta - \\\\theta_0$) to be orthogonal to the output gradient of other tasks $\\\\nabla_\\\\theta f(\\\\theta_0, x_{\\\\text{other}})$ in the pre-trained model (see Section 4.1 for details). Similar approaches, which aim to orthogonalize model weights to a specific vector by incorporating the L2 norm of their inner product as a regularization term, have been explored in previous studies (e.g., [1]) and have demonstrated their effectiveness. Our method is based on a similar idea, and our experimental results confirm that the task vector is effectively guided to be orthogonal to $\\\\nabla_\\\\theta f(\\\\theta_0, x_{\\\\text{other}})$. This effect, as a result, keeps $ \\\\tau_{\\\\text{Jp}}$, and the relationship between this reduction, improved weight disentanglement, and enhanced task arithmetic performance is elaborated in Section 3.\\n\\n[1] Wang, Xiao, et al. \\\"Orthogonal subspace learning for language model continual learning.\\\" arXiv preprint arXiv:2310.14152 (2023).\"}",
"{\"title\": \"Response to reviewer onTx\", \"comment\": \"### Regarding Weakness 2 and Question 1\\n>The computational cost of calculating \\u03c4Jp is likely very high, as it involves multiple Jacobian-vector products. The paper does not report runtime or resource requirements, making it difficult to assess scalability.\\n\\n>How does the computational cost of the proposed method compare to existing approaches?\\n\\nThank you for your insightful comments. Below, we provide details regarding the computational complexity, runtime, and required resources for our proposed method.\\n\\nFirst, as you correctly pointed out, the computation of the regularization term involves Jacobian calculations. However, this can be efficiently performed using Jacobian-vector products (JvPs), which are generally computed with the same complexity as a forward pass through forward-mode automatic differentiation [2]. To further improve computational efficiency, we used a batch size of $\\\\frac{1}{8}$ of the batch size used for computing the loss on the target task for vision tasks, and $\\\\frac{1}{4}$ for NLP tasks. This ensures that the computation of the regularization term has minimal impact on the overall computational cost of the forward pass.\\n\\nThe actual runtime and corresponding performance results are shown in Table 6 of Appendix C. We have also included a baseline for the updated results, which corresponds to the case without any regularization (No reg., i.e., Linear FT). While computing the regularization term for all other tasks at every iteration (as in Eq. 10) is highly effective, it comes with a significant computational cost. To address this limitation, we introduced cyclical regularization, where only one task is considered for regularization at each iteration, and tasks are cycled through. This approach successfully reduces runtime by approximately 80% while maintaining comparable performance. The runtime for cyclical regularization is nearly identical to that of Linear FT (no regularization) while achieving substantial performance improvements. Furthermore, the runtime of cyclical regularization does not depend on the number of tasks being considered, demonstrating its scalability.\\n\\nAdditionally, as discussed in [3], linearizing fine-tuning generally incurs approximately 2\\u20133 times the computational cost compared to traditional Non-lin. FT. This is a broader challenge within the context of linear fine-tuning, not limited to our method. However, as outlined in Section 6, approaches such as LoRA and other parameter-efficient techniques have been shown to reduce the computational cost of linearized fine-tuning. This opens opportunities for further improving the efficiency of our method. Whether our regularization approach retains its effectiveness when combined with such efficient methods remains an open question and is left as future work.\\n\\nBased on these findings, we conclude that our proposed method operates within a scalable computational budget and is practical for implementation and experimentation with realistic runtime and resource requirements.\\n\\n[2] Baydin, Atilim Gunes, et al. \\u201cAutomatic differentiation in machine learning: a survey.\\u201d *Journal of Machine Learning Research* 18.153 (2018): 1-43.\\n\\n[3] Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 2023.\"}",
"{\"summary\": \"The paper presents a novel approach to task arithmetic in neural networks, which leverages a novel metric that quantifies the relationship between task vectors and the Jacobian of pre-trained models. The authors claim that by minimizing this metric through regularization, they can significantly reduce interference between task predictions and enhance the accuracy of task arithmetic operations. The experimental results demonstrate substantial improvements in performance for both task addition and task negation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well-written and easy to follow.\", \"The experiments are extensive, and the results sound good.\", \"The design of metric \\u03c4Jp is reasonable and interesting.\"], \"weaknesses\": [\"The novelty of this paper may be limited. My consideration is that this paper seems to fundamentally align with the approach proposed by Ortiz-Jimenez et al. (2023) [1], which also emphasizes fine-tuning models in the tangent space. Although using the specific regularization term, this paper does not sufficiently differentiate itself from this existing work.\", \"While the empirical results are compelling, the paper lacks a thorough theoretical explanation for why the proposed regularization leads to better performance compared to other methods, such as those discussed in Ortiz-Jimenez et al. (2023). I am confused about why a simple and soft regularization results in such improvement compared to [1]. A deeper theoretical analysis could strengthen the paper's contributions.\", \"The authors briefly mention tuning the regularization strength but do not provide sufficient details on how this hyperparameter was selected. The sensitive analysis of this hyperparameter is also necessary for the paper.\", \"[1] Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 2023.\"], \"questions\": \"Could the proposed regularization affect the model's plasticity? Specifically, how might the addition of this regularization impact the fine-tuning performance, potentially influenced by the strength of the regularization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}"
]
} |
1V28zvLJMg | Debiased Deep Evidential Regression for Video Temporal Grounding | [
"Kaijing Ma",
"Haojian Huang",
"Jin Chen",
"Haodong Chen",
"Xianghao Zang",
"Han Fang",
"Chao Ban",
"Hao Sun",
"Mulin Chen",
"Xuelong Li"
] | Existing Video Temporal Grounding (VTG) models perform well in accuracy but often fail to address open-world challenges posed by open-vocabulary queries and out-of-distribution (OOD) videos, which can lead to unreliable predictions. To address uncertainty, particularly with OOD data, we build a VTG baseline using Deep Evidential Regression (DER), which excels in capturing both aleatoric and epistemic uncertainty. Despite promising results, our baseline faces two key biases in multimodal tasks: (1) Modality imbalance, where uncertainty estimation is more sensitive to the visual modality than the text modality; (2) Counterintuitive uncertainty, resulting from excessive evidence suppression in regularization and uneven sample error distribution in conventional DER. To address these, we propose an RFF block for progressive modality alignment and a query reconstruction task to enhance sensitivity to text queries. Additionally, we introduce a Geom-regularizer to debias and calibrate uncertainty estimation. This marks the first extension of DER in VTG tasks. Extensive experiments demonstrate the effectiveness and robustness of our approach. Our code will be released soon. | [
"Video Temporal Grounding",
"Uncertainty Quantification",
"Multi-Modal Fusion",
"Deep evidential regression",
"Evidential deep learning"
] | Reject | https://openreview.net/pdf?id=1V28zvLJMg | https://openreview.net/forum?id=1V28zvLJMg | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"z4OOk894Pk",
"ycCnXPtIY3",
"yEI3KwVao9",
"yCMQ7umEgt",
"wFUWjRfYYB",
"vvEYtvHAuV",
"tpZJ3jnLXR",
"sMcgrhoPNR",
"rlYEyuR0zL",
"qgjh29Ig0t",
"qfJcCcaytb",
"oGZ6hSeMnZ",
"lxMDkwIH3R",
"k7dkBHd3KS",
"hxzyv242sO",
"hFT11obCvq",
"hEJJJM4WmV",
"gBkqMlAwyr",
"aXBUQyQGIT",
"ZI4xeb3JCP",
"ZBXVs6QTNx",
"YaiqwntmgA",
"XplmeToTfG",
"UbiPoZf33t",
"UWR8FXjWgI",
"UUfWcN1XyN",
"S5uoSHbWbb",
"S1SIY92p3F",
"RqFz8lv1zd",
"RYvRIlQkvw",
"RMlVr6iLih",
"RDCYgCIASV",
"O6arPoLGr9",
"NRUM4Nk5sX",
"N6FNDuHDBZ",
"L6HmI1cr4p",
"JvdpMeCEZX",
"J0nP1LVR7v",
"EV9e1HkVem",
"E2aAxp47WA",
"DoS51UcBfb",
"DWpBFP1EKc",
"CprTbDgbuG",
"9kfkO4fl4X",
"8VIi6Lppum",
"5ctn8ANDHb",
"5Z5SGaSjQ6",
"5RQLfp5Akk",
"3Rn888ZO6h",
"3NPeZcziMJ",
"3FJYfQ9c63",
"2jso0I0bjO",
"1yLQcywXFv"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732293519669,
1732294272143,
1730606673787,
1732295348809,
1733204184041,
1733226847482,
1732440880960,
1730207272368,
1732812235482,
1732812407350,
1732351018251,
1733147401548,
1730535209945,
1732351251390,
1732256363901,
1733146382681,
1737523765277,
1732255065454,
1732518633661,
1732518609980,
1733149969176,
1734680152226,
1732103183314,
1732103493258,
1732256681830,
1732293961167,
1732257015166,
1732293723741,
1732812754879,
1732103534173,
1733146109636,
1733210746374,
1732103108193,
1732440939480,
1730713111512,
1732255186067,
1732103389440,
1732294751928,
1733150003680,
1732440765829,
1732440307769,
1732428396062,
1732356847226,
1732440916074,
1732351433214,
1733227014242,
1732351789536,
1733212250209,
1732519930293,
1732294133038,
1732518568037,
1732295179049,
1732351552727
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_QN6J"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_yg5S"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_FPPU"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_FPPU"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_FPPU"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Area_Chair_u3dz"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_FPPU"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_QN6J"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_MSwQ"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Reviewer_yg5S"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6369/Authors"
]
],
"structured_content_str": [
"{\"title\": \"1. Regarding the readability of Figure 3 (Addressing Weakness 1)\", \"comment\": \"To improve readability and better highlight the main innovation of the Geom-Regularizer in uncertainty modeling, we are revising Figure 3 to clarify the distinction between different components. The updated version will be included in the final manuscript.\"}",
"{\"title\": \"5. On spelling and grammatical errors (Addressing Weakness 5)\", \"comment\": \"We sincerely thank the reviewers for their careful reading. All identified spelling and grammatical errors will be corrected in the final submission.\"}",
"{\"summary\": \"The paper presents a novel approach to Video Temporal Grounding (VTG) by integrating Deep Evidential Regression (DER) to address uncertainties in open-world scenarios, such as out-of-distribution (OOD) data and open-vocabulary queries. The authors propose a Debiased DER Model for Video Temporal Grounding (DDM-VTG) that tackles modality imbalance and counterintuitive uncertainty through a Reflective Flipped Fusion (RFF) block, a query reconstruction task, and a Geom-regularizer. The model demonstrates effectiveness and robustness across multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe proposed baseline model is innovative for its integration of Deep Evidential Regression (DER) with VTG tasks to address both aleatoric and epistemic uncertainties.\\n2.\\tThe paper not only identifies the existence of modal imbalance and structural flaws in regularization within the baseline model but also offers solutions to these issues.\\n3.\\tThe authors have conducted extensive experiments across various benchmarks, which effectively demonstrate the efficacy of their approach.\", \"weaknesses\": \"1.\\tWhile the paper presents a novel approach to addressing uncertainties in VTG, it could benefit from a deeper analysis of the limitations of the proposed model, especially in handling highly ambiguous queries or extremely OOD data.\\n2.\\tThe paper could provide more insights into how the DDM-VTG model generalizes to other video-related tasks beyond the tested benchmarks.\\n3.\\tWhen designing the baseline, whether DER provides positive assistance for the correct prediction of the model, the author needs to provide corresponding proof experiments.\\n4.\\tWhen introducing the baseline, the author believes that it has a modal imbalance problem, and DDM-VTG effectively alleviates this imbalance, which requires corresponding experimental evidence.\\n5.\\tThe method proposed by the author showed out of distribution predictions on the qv height dataset, which to some extent indicates the generalization of DDM-VTG, but it is not clear and specific enough. The author needs to provide results on charades-CD.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"8. On inference for longer videos (Addressing Question 3)\", \"comment\": \"The applicability of our proposed uncertainty estimation method to longer videos is inherently linked to the length of videos seen during training. If the model is trained on datasets with long videos, it is reasonable to believe that the model has the potential to generalize to even longer videos.\\n\\nAs for the computational cost, it is primarily determined by the backbone architecture. Since our focus is on achieving reliable uncertainty estimation, we maintain consistency with existing VTG methods in feature extraction. Thus, the computational complexity is largely driven by the quadratic time complexity of the transformer, which makes inference on long videos inherently resource-intensive. \\n\\nHowever, with the continuous advancements in long video research, we are optimistic that our proposed method can be scaled up to support long video inference with reduced computational cost in the future.\"}",
"{\"title\": \"Supplementary Discussion on Q1\", \"comment\": \"We would like to thank you for your valuable feedback. In response to your concerns, we would like to clarify why our model, despite being trained on matched text-video pairs, is capable of measuring uncertainty when handling queries in an open-world setting, specifically for Video Temporal Grounding (VTG). The reasons for this capability are as follows:\\n\\n1. **Introduction of Deep Evidential Regression (DER):** We leverage the Deep Evidential Regression (DER) technique, which learns a second-order distribution over the Gaussian parameters. This means that the model, when fitted to the training data, learns a higher-order distribution that allows it to perceive differences between new input samples (such as unmatched pairs) and the matched pairs seen during training. As a result, the model can express the reliability of its predictions through uncertainty measurements.\\n2. **Challenges in Utilizing DER for Explicit Uncertainty Measurement in VTG:** Despite the effectiveness of DER, directly applying it to VTG as a baseline for explicit uncertainty measurement presents two challenges:\\n - **Modality Imbalance:** There is a mismatch in the model's sensitivity to anomalous videos and queries. The model shows different levels of sensitivity to such outliers, which can lead to biased uncertainty measurements.\\n - **Bias in Uncertainty Estimation** We observed that the uncertainty estimated by the model during inference does not always adhere to the expected behavior: **\\\"correct predictions should have low uncertainty, and incorrect predictions should have high uncertainty.\\\"** This suggests that the evidence and uncertainty learned by the model during training are misaligned, which in turn affects the higher-order distribution and leads to unreliable uncertainty estimates at inference time.\\n\\nTo address these issues, we adopted a more refined modal alignment strategy and made structural improvements to the regularizer used in the vanilla DER (introducing a geom-regularizer). These changes ensure that the model accurately recognizes matched samples and can effectively distinguish unmatched samples with substantial differences.\\n\\nThrough extensive experiments, we validate our uncertainty measurement method. On one hand, we found that this approach unexpectedly improves the model's performance on downstream tasks, likely due to its enhanced ability to perceive uncertainty. On the other hand, the model demonstrates an explicit ability to measure location bias and handle anomalous inputs in the open-world setting, thus achieving the intended design goal. Regarding the details above, we have further elaborated on them in the supplementary discussion titled **\\u201cSupplementary Discussion (Addressing Weakness 3)\\u201d**, which was provided yesterday. \\nWe would like to invite you to review it and let us know if you have any further questions. If we have successfully addressed your concerns, we would greatly appreciate your positive response at your convenience.\"}",
"{\"comment\": \"Dear reviewer FPPU:\\n\\nWith the discussion stage ending soon, we want to kindly follow up to check if our response has addressed your questions and concerns. If yes, would you kindly consider raising the score before the discussion phase ends\\uff1f We are very grateful for your time and effort\\uff01\"}",
"{\"comment\": \"Dear reviewer,\\n\\nWe wonder if our response answers your questions and addresses your concerns? If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!\"}",
"{\"summary\": \"This paper studies the issue of open-world challenges caused by open-vocabulary queries and out-of-distribution videos in video temporal grounding. The authors adopt the Deep Evidential Regression as baseline, and propose a Reflective Flipped Fusion block to realize modality alignment and query reconstruction. Meanwhile, a Geom-regularizer is proposed to debias and calibrate uncertainty estimation. Extensive experiments are conducted on the public dataset to validate the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper extends the deep evidential regression to video temporal grounding for uncertainty estimation.\\n2. The authors propose a Geom-regularizer to solve the counterintuitive uncertainty and calibrate the estimation of uncertainty. \\n3. The proposed method achieves comparable performance in the majority of benchmarks.\", \"weaknesses\": \"1. The evaluation of location bias is insufficient. There are no transfer experiments on the Charades-CD and ActivityNet-CD datasets to validate the model in OOD scenarios, as done by MomentDETR and MomentDiff.\\n2. The study of query reconstruction (QR) is not thorough. The authors only present performance across different QR epochs and learning rates.\\n3. Insufficient performance evaluation. Ego4D-NLQ is widely used in previous works, yet this study does not report results on this dataset. Additionally, the paper fails to compare with recent works, such as \\\"R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding\\\" from ECCV 2024.\", \"questions\": \"1. Why does the model only mask and reconstruct one noun? Would masking more words help enhance text sensitivity?\\n2. In the conclusion, the authors claim that the model\\u2019s capabilities are limited by data quality and scale. ActivityNet-Captions and Ego4D-NLQ are large-scale datasets. Would the model perform well on these two datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Due to the lack of substantial theoretical and experimental analysis, with only intuitive and vague responses, this question has not been adequately addressed.\"}",
"{\"comment\": \"Thanks for your reply. This question has been addressed.\"}",
"{\"title\": \"1. Supplementing the evaluation of location bias (Addressing Weakness 1)\", \"comment\": \"We have provided additional results for DDM-VTG on the ActivityNet-CD and Charades-CD datasets, including downstream performance and average uncertainty metrics (**see Answer 5 to Reviewer QN6J**). Additionally, we report the average uncertainty of all samples across in-domain (iid) and out-of-domain (ood) test sets for the two CD datasets. The results demonstrate that uncertainty for ood samples is significantly higher than for iid samples, indicating that the uncertainty estimation by DDM-VTG is generally reasonable.\\n| | iid | ood (CD) |\\n| --- | --- | --- |\\n| ActivityNet-CD | 0.09 | 0.14 |\\n| Charades-CD | 0.03 | 0.11 |\"}",
"{\"title\": \"Supplementary discussion (Addressing Weakness 3)\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion phase is nearing its end, we noticed that we have not yet received a response from you. To address any potential points of confusion and provide further clarification, we have revisited and reorganized our discussion on this issue.\\n\\n1. **Addressing the Challenges in Figure 2:**\\nWe have conducted a series of quantitative experiments involving VTG downstream tasks (Tables 1, 2, and additional results, such as Answer 5 to **Reviewer yg5S** comparing performance on large-scale datasets). Our experimental findings show that the proposed method achieves competitive performance relative to recent state-of-the-art VTG methods. This suggests that the uncertainty measurement approach itself is meaningful. While the primary focus of this study is on measuring uncertainty in VTG inference, we are pleasantly surprised to observe that our method also leads to improvements in VTG task performance. We believe this enhancement stems from the fact that modeling uncertainty strengthens the model\\u2019s ability to extract important information from the video.\\n2. **Evaluating Uncertainty Measurement Across Modalities:**\\nTo further validate that our uncertainty measurement method can effectively capture uncertainty introduced by the text and image modalities, we conducted adversarial experiments. By introducing varying levels of noise into the text and image modalities, we observed the resulting changes in uncertainty during inference. This setup directly simulates the challenges described in Figure 2. The uncertainty distributions (shown in Figure 6) indicate that our method is highly sensitive to adversarial samples in both modalities, which helps address the modality imbalance issue present in the baseline model.\\n3. **Addressing the Issue of Video Length Distribution:**\\nAs noted, videos of varying lengths are common in real-world applications. Reference [1] discusses how current VTG datasets are affected by the uneven distribution of video lengths, leading to potential bias since models often focus on specific time intervals where most localization occurs. To address this, we visualized the uncertainty inference results on QVHighlight (Figure 8) and conducted ablation studies. The results demonstrate that our method can effectively capture this issue through uncertainty measurement, with the model exhibiting significantly higher uncertainty for less frequent localization results. This further highlights that DDM-VTG exhibits strong capabilities in handling out-of-distribution (OOD) samples and mitigating location bias.\\n \\n In response to the concerns raised by **Reviewer QN6J** and **Reviewer yg5S** regarding the generalization of DDM-VTG, we have added experiments on ActivityNet-CD and Charades-CD as suggested (please refer to Answer 5 to Reviewer QN6J). In these datasets, the IID samples represent data processed to address location bias, while the OOD samples are unprocessed. The results show that our method outperforms the baseline in OOD inference and achieves a smaller **IID-OOD Gap**, indicating that DDM-VTG is robust to location bias.\\n \\n4. **Qualitative Case Studies:**\\nWhile we have presented numerous quantitative results, our primary goal is to demonstrate that the model can make robust inferences even when confronted with clearly anomalous inputs, as outlined in Figure 1. 
To visually demonstrate this ability, we have included a series of case studies (**Figure 11 in Appendix C.3**, and **Figures 13\\u201316 in Appendix C.5**). These cases cover the challenges mentioned in Figure 2(a), (b), and (d). Since existing VTG methods do not explicitly quantify uncertainty to address these challenges, we chose UniVTG as a representative baseline for comparison. The case-by-case comparisons and analyses have been provided in the **rebuttal.zip**. These examples confirm that our model is able to sensibly and accurately express the uncertainty involved in inferences when facing these challenges.\\n \\n However, due to the inherent limitations of the model's knowledge capacity, we observed that the model's inference on extreme samples still lacks fine-grained characterization. For example, similar extreme OOD samples tend to consistently show extreme uncertainty values, such as 0.99 or 1.00. We sincerely hope to further improve this aspect in future work, allowing the model to produce more nuanced and reliable uncertainty estimates, which would lead to more trustworthy VTG inferences.\\n \\n\\n[1] Yuan, Yitian, et al. A closer look at temporal sentence grounding in videos: Dataset and metric.\\u00a0*Proceedings of the 2nd international workshop on human-centric multimedia analysis*. 2021.\"}",
"{\"summary\": \"This paper proposes Debiased DER Model for VTG, tackling open-vocabulary queries and out-of-distribution videos in video temporal grounding tasks. It extends the vanilla DER to VTG and establishes a baseline. To address two critical biases in the baseline\\u2014modality imbalance and counterintuitive uncertainty\\u2014the method incorporates a RFF block for progressively enhancing modal alignment, a query reconstruction task to ensure robust cross-modal alignment capabilities and a Geom-regularizer to calibrate uncertainty estimation. The proposed method has been evaluated on 4 datasets, demonstrating its effectiveness in Moment Retrieval, Highlight Detection and Video Summarization. The ablation studies also support the analysis.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The basic idea is easy to follow and the main motivation is clear.\", \"The innovative integration of DER into VTG tasks is a novel approach that effectively addresses key issues like OOD videos.\", \"The proposed method achieves strong experiment results, both compared to its baseline and other SOTA methods.\"], \"weaknesses\": [\"In figure3, I can\\u2019t see the difference between the two distributions except for the color, which might be confusing as to why one is unreliable and the other is trustworthy.\", \"About the presentation. In 4.3, there is a significant disparity in the level of detail explained for different modules, perhaps the arrangement of content in the main text and appendix could be adjusted to make it clearer for readers.\", \"The experimental section only shows the comparison with SOTA methods on various metrics. In the appendix, only some cases of the QVHighlights dataset are shown, without visual results for the other datasets mentioned in the paper, and it also lacks displays of comparative results for the three sub-tasks.\", \"It would be more complete to have a discussion of this increased cost if there are any, as well as techniques used to overcome it.\", \"(Minor) Minor typos/grammatical mistakes (e.g. 4.2 \\u201cVALLINA\\u201d)\"], \"questions\": [\"In Figure 2, several challenges within VTG tasks are highlighted, but it appears that targeted comparative experiments were not conducted in the study. When compared with other works, can DDM-VTG perform better in addressing these challenges? Some discussions are expected.\", \"In the Query Reconstruction task, how can DDM-VTG ensure that the tokens predicted by the QR head are accurate when dealing with complex videos? What happens if the predictions are incorrect? Does it affect the accuracy of temporal localization of the whole video?\", \"In the case study, the average length of the videos is 150 seconds. How would the model perform with longer videos, and would the cost increase significantly?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"2. On the study of the QR task (Addressing Weakness 2)\", \"comment\": \"The improvements introduced by the QR task to baseline methods, as well as its impact on downstream performance, are presented in Table 3(a) and Figure 6 of the main text. Additionally, we conducted experiments on the effect of `mask_ratio` for QR on the QVHighlight dataset, as shown below:\\n| mask_ratio | w/o. mlm | 1 noun | 0.25 | 0.5 | 0.75 | all noun |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| mAP| 32.75 | **36.93** | 33.40 | 32.43 | 31.80 | 31.71 |\\n| R1@0.5 | 57.23 | **64.06** | 61.74 | 59.55 | 59.23 | 58.52 |\\n| R1@0.7 | 37.11 | **43.61** | 39.17 | 37.13 | 34.26 | 34.45 |\\n\\nFrom these results, we observe that QR affects the model\\u2019s ability to align text and video modalities. When the `mask_ratio` is very high, the model struggles to answer QR tasks correctly due to insufficient text context, making it difficult to align video clues. We also experimented with masking nouns, as they carry significant semantic information in VTG tasks. When all nouns are masked, the text-video alignment performance deteriorates significantly. Conversely, masking only one noun enables the model to leverage multi-modal context to achieve better cross-modal alignment.\\n\\nNotably, while QR\\u2019s primary goal is to address modality imbalance in uncertainty estimation, it also positively impacts downstream performance when properly configured.\"}",
"{\"title\": \"3. On Whether DER Corrects Model Predictions\", \"comment\": \"We introduce DER to **provide reliable uncertainty estimation in VTG**, though exploring how the model leverages this uncertainty to refine its predictions is not the primary focus of our work. Interestingly, our proposed approach yields notable downstream performance improvements as a serendipitous benefit. These enhancements complement the performance comparison between DDM-VTG and the baseline on downstream tasks, as summarized in Table 1 of the original manuscript. Below is the performance comparison across QVHighlight and Charades datasets:\\n| **Method** | **QVH-MR R@0.5** | **QVH-MR R@0.7** | **QVH-MR Avg.M** | **QVH-HD MAP** | **QVH-HD HIT@1** | **Charades-STA R@0.5** | **Charades-STA R@0.7** | **Charades-STA mIoU** |\\n|:-------------------:|:----------------:|:----------------:|:----------------:|:--------------:|:-----------------:|:----------------------:|:----------------------:|:---------------------:|\\n| **Baseline (ours)** | 56.8 | 39.2 | 35.3 | 39.8 | 62.2 | 54.7 | 35.4 | 48.6 |\\n| **DDM-VTG (ours)** | **65.0** | **49.4** | **43.0** | **40.1** | **63.4** | **60.2** | **38.0** | **51.6** |\"}",
"{\"title\": \"On inference for longer videos (Addressing Question 3)\", \"comment\": \"To the best of our knowledge, there is no existing benchmark specifically designed to evaluate the performance of VTG models on test sets with varying video lengths, particularly for longer videos. To address this, we created our own benchmark based on QVHighlight: We randomly select `x` videos from the corresponding splits and concatenate with the original video with a random order, where `x` takes value from 0 to 5. Which means we create 6 datasets (with 6 length levels ), from 150 seconds to 15 minutes. We controlled for the same number of training epochs (epoch = 180) to obtain the experimental results and training durations. And 4 Tesla V100-32G are applied.\\n\\n| | x=0 | x=1 | x=2 | x=3 | x=4 | x=5 |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| MR-R1@0.3 | 76.45 | 71.94 | 68.90 | 67.74 | 65.94 | 59.61 |\\n| MR-R1@0.5 | 64.77 | 59.81 | 57.42 | 55.48 | 55.35 | 46.06 |\\n| MR-R1@0.7 | 45.68 | 38.58 | 36.77 | 35.74 | 36.77 | 27.10 |\\n| MR-R5@0.3 | 90.65 | 87.29 | 86.32 | 84.45 | 83.94 | 77.23 |\\n| MR-mAP | 40.09 | 35.10 | 33.20 | 32.54 | 33.54 | 26.68 |\\n| HL-mAP | 39.85 | 38.56 | 26.8 | 19.82 | 16.57 | 19.83 |\\n| HL-Hit1 | 63.74 | 61.48 | 41.29 | 28.52 | 23.16 | 22.84 |\\n| Training Time | t | 1.13t | 1.18t | 1.23t | 1.31t | 1.46t |\\n\\nThe results indicate that as the video length increases, the model's downstream performance gradually deteriorates under the same number of epochs. This degradation is likely related to underfitting and the increased complexity of the contextual information that the model needs to process. However, we observed that the time required to run the model did not increase significantly when using the same number of epochs. Specifically, for x=5 (15-minute, micro-film-length videos), the runtime was only 1.46 times longer than that for x=0, with the same number of epochs.\\n\\nAdditionally, we included zero-shot inference results of models trained on the original video length, directly applied to the constructed long video dataset\\u2019s val splits. And only a single Tesla V100-32G is applied.\\n\\n| | x=0 | x=1 | x=2 | x=3 | x=4 | x=5 |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| MR-R1@0.3 | 76.45 | 67.94 | 61.35 | 51.35 | 41.03 | 35.29 |\\n| MR-R1@0.5 | 64.77 | 50.77 | 36.13 | 24.97 | 18.00 | 13.10 |\\n| MR-R1@0.7 | 45.68 | 26.58 | 15.35 | 10.05 | 7.83 | 5.03 |\\n| MR-R5@0.3 | 90.65 | 71.68 | 65.87 | 56.97 | 46.84 | 40.13 |\\n| MR-mAP | 40.09 | 23.96 | 15.53 | 10.55 | 7.23 | 5.36 |\\n| HL-mAP | 39.85 | 37.49 | 26.47 | 19.29 | 15.84 | 13.51 |\\n| HL-Hit1 | 63.74 | 68.90 | 41.48 | 27.55 | 23.23 | 18.90 |\\n| Inference Time | t | 1.84t | 2.54t | 3.31t | 4.23t | 4.61t |\\n\\nWe can observe that the model is affected to some extent by the more complex contextual information and Temporal OOD. In the case of zero-shot inference, its performance significantly degrades when applied to `x`=3, 4, and 5, which have longer video lengths. This suggests that directly applying a model trained on shorter videos (150s) to inference on longer videos results in noticeable downstream performance degradation, with this degradation becoming more pronounced as the video length increases. However, we observed that the inference time generally increases linearly with video length. Specifically, when performing inference on a 900s video, the inference time is 4.61 times longer than for a 150s video, which seems reasonable.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"1. Discussion on the Limitations of DDM-VTG\", \"comment\": \"Due to model knowledge capacity and dataset constraints, DDM-VTG is unable to provide more fine-grained uncertainty estimates for extreme out-of-distribution (OOD) samples. Specifically, in our experiments, we observed that the uncertainty distribution estimated by DDM-VTG for varying degrees of OOD samples is not sufficiently uniform. For example, similar extreme OOD samples are consistently estimated with extreme uncertainty values such as 0.99 or 1.00. We plan to address this issue in future work through targeted optimization.\"}",
"{\"comment\": \"Dear reviewer:\\n\\nWith the discussion stage ending soon, we wanted to kindly follow up to check if our response has addressed your questions and concerns. If yes, would you kindly consider raising the score before the discussion phase ends\\uff1f We are truly grateful for your time and effort\\uff01\"}",
"{\"comment\": \"Dear reviewer:\\n\\nWith the discussion stage ending soon, we wanted to kindly follow up to check if our response has addressed your questions and concerns. If yes, would you kindly consider raising the score before the discussion phase ends\\uff1f We are truly grateful for your time and effort\\uff01\"}",
"{\"comment\": \"Dear reviewer,\\n\\nAs the discussion stage, which has been extended by the ICLR committee, is ending soon, we wonder if our response answers your questions and addresses your concerns? If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!\"}",
"{\"metareview\": \"This work introduces a new model, DDM-VTG, that integrates Deep Evidential Regression into Video Temporal Grounding in order to handle uncertainties in open-world scenarios. The paper got diverse recommendations (two acceptance and two rejection). Though there are merits in this paper, the work has the following weaknesses which are raised by reviewers: (1) lack of substantial theoretical and experimental analysis (2) performance improvement is not obvious (3) there are concerns about the used open world datasets. AC appreciates the contributions of this work, but still think the current version is not ready for publication at this top conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided additional experiments and analysis, and provided the motivations. However, the replies have not fully addressed reviewers' concerns.\"}",
"{\"title\": \"2. On the Performance in Video Summarization (Addressing Weakness 2)\", \"comment\": \"We acknowledge that the results reported in Table 2 for video summarization, while decent, may appear less competitive due to the following reasons:\\n\\n1. **Domain-Specific Challenges in TVSum**\\n \\n TVSum includes videos from 10 distinct domains. Although often treated as \\\"a single dataset,\\\" prior works have adopted a domain-specific tuning strategy to optimize results separately for each domain. For example, UniVTG[1] explicitly states in Table 1 of its appendix that hyperparameters were chosen via a \\\"search\\\" process. From our observations, using different hyperparameters for different domains can significantly impact performance. While we could employ similar domain-specific optimizations to achieve higher results, we chose to treat TVSum as a single dataset with a shared hyperparameter set to derive a **meaningful average metric**. As shown in Table 2 (Column 11), this approach achieves a 3.6% improvement over UniVTG, which employs domain-specific tuning.\\n \\n2. **Focus on Uncertainty Estimation, Not Downstream Tasks**\\n \\n The primary focus of this work is improving uncertainty estimation in VTG through innovations such as the RFF block, QR task, and geom-regularizer. The downstream task results reported in Table 2 aim to demonstrate that our approach provides solid multimodal understanding and VTG capabilities, ensuring the reliability of our core uncertainty-related discussions. While downstream performance improvements are a welcome byproduct of these innovations, they are not the central goal of our work.\\n\\n[1] Lin K Q, Zhang P, Chen J, et al. UniVTG: Towards unified video-language temporal grounding, ICCV. 2023\"}",
"{\"title\": \"4. On DDM-VTG\\u2019s Uncertainty Estimation for Unreasonable Text Queries (Addressing Question 1)\", \"comment\": \"We evaluate DDM-VTG\\u2019s ability to estimate uncertainty for unreasonable text queries through both qualitative and quantitative experiments:\\n\\n- **Qualitative Analysis:**\\n \\n Figure 13 presents a representative case study involving mismatched text queries (lines 1162\\u20131176).\\n \\n- **Quantitative Analysis:**\\n \\n Figure 6 simulates unreasonable text queries by replacing a proportion of tokens in the original text with random tokens from other text queries in the batch. This provides a controlled evaluation of DDM-VTG\\u2019s uncertainty estimation for unreasonable or incoherent queries.\"}",
"{\"title\": \"4. On Modality Imbalance\", \"comment\": \"In Table 3(a), we report ablation experiments demonstrating the effectiveness of our method in addressing modality imbalance by introducing metrics such as $\\\\mathrm{Var_{vis}}$ and $\\\\mathrm{Var_{text}}$. We provide detailed explanations of these metrics and their purposes in lines 386-389 and 410-418, along with an analysis of the ablation results. Additionally, Figure 6 offers empirical evidence of DDM-VTG's ability to alleviate modality imbalance, as discussed in lines 452-470.\"}",
"{\"title\": \"3. Additional datasets and visual results (Addressing Weakness 3)\", \"comment\": \"Cases on additional datasets will be uploaded to the supplementary material (**rebuttal.zip**). Results on other sub-tasks have been provided in **Table 1&2** of the main text.\"}",
"{\"title\": \"5. On Performance on Charades-CD and ActivityNet-CD\", \"comment\": \"We supplement the evaluation of **DDM-VTG** on Charades-CD and ActivityNet-CD (ANet-CD), reporting both downstream performance and metrics such as average uncertainty. The results are summarized below:\\n\\n| Method | Charades-CD R1@0.3 | Charades-CD R1@0.5 | Charades-CD R1@0.7 | Charades-CD MAP_avg | ANet-CD R1@0.3 | ANet-CD R1@0.5 | ANet-CD R1@0.7 | ANet-CD MAP_avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| **MMIN (iid)** | - | - | - | - | - | - | - | - |\\n| **MMIN (ood)** | 55.91 | 34.56 | 15.84 | 15.73 | 44.13 | 24.69 | 12.22 | 15.06 |\\n| ***\\u0394 \\u2193*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** |\\n| **MomentDETR (iid)** | - | - | - | - | - | - | - | - |\\n| **MomentDETR (ood)** | 57.34 | 41.18 | 19.31 | 18.95 | 39.98 | 21.30 | 10.58 | 12.19 |\\n| ***\\u0394 \\u2193*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** |\\n| **CM-NAT[1] (iid)** | 64.21 | 53.82 | 34.47 | - | 49.91 | 41.67 | 28.82 | - |\\n| **CM-NAT[1] (ood)** | 52.21 | 39.86 | 21.38 | - | 32.32 | 20.78 | 11.03 | - |\\n| ***\\u0394 \\u2193*** | ***12.00*** | ***13.96*** | ***13.09*** | ***-*** | ***17.59*** | ***20.89*** | ***17.79*** | ***-*** |\\n| **MomentDiff (iid)** | - | - | - | - | - | - | - | - |\\n| **MomentDiff (ood)** | 67.73 | 47.17 | 22.98 | 22.76 | 45.54 | 26.96 | 13.69 | 16.38 |\\n| ***\\u0394 \\u2193*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** | ***-*** |\\n| **Ours (iid)** | 71.10 | 62.20 | 43.29 | 43.41 | 56.33 | 41.77 | 27.47 | 27.68 |\\n| **Ours (ood)** | 67.81 | 52.46 | 30.97 | 32.80 | 41.64 | 23.76 | 16.89 | 17.49 |\\n| ***\\u0394 \\u2193*** | ***3.29*** | ***9.74*** | ***12.32*** | ***10.61*** | ***14.69*** | ***18.01*** | ***10.58*** | ***10.19*** |\\n---\\n### Key Insights:\\n\\n1. **Superior OOD Performance:**\\n \\n Our method demonstrates strong generalization in out-of-distribution (OOD) settings. For example, on Charades-CD, DDM-VTG achieves a **10.04% higher mAP** than MomentDiff and **13.85% higher mAP** than MomentDETR. This performance highlights DDM-VTG's robustness under distributional shifts.\\n \\n2. **Small IID-OOD Gap:**\\n \\n The performance gap between IID and OOD splits for DDM-VTG is significantly smaller than that of CM-NAT (e.g., ***3.29% vs. 12.00%*** at Charades-CD R1@0.3). This suggests that DDM-VTG effectively learns generalized features that reduce susceptibility to dataset biases.\\n \\n3. **Generalization to Challenging Scenarios:**\\n \\n The reduced IID-OOD performance gap, coupled with competitive results across all metrics, underscores DDM-VTG's ability to combat dataset biases and adapt to OOD scenarios more effectively than existing methods.\\n\\n[1] Lan, Xiaohan, et al. \\\"Curriculum multi-negative augmentation for debiased video grounding.\\\" AAAI 2023.\"}",
"{\"title\": \"2. On content distribution in the main text (Addressing Weakness 2)\", \"comment\": \"Considering the page limit, we slightly reduced the description of the model architecture in the main text. We will optimize the layout and provide further refinements later.\"}",
"{\"comment\": \"There is still a lack of analytical and exploratory discussion regarding the experimental results.\"}",
"{\"title\": \"5. On the Definition of Prediction Accuracy in the Geom-Regularizer (Addressing Question 2)\", \"comment\": \"We define prediction accuracy using a simple error metric, as detailed in **line 184**. In the context of the geom-regularizer, where accuracy must lie between 0 and 1, we apply a straightforward normalization technique. This is described in **lines 301\\u2013310** and further elaborated in **Appendix B.2**.\"}",
"{\"title\": \"On addressing the challenges shown in Figure 2 (Addressing Question 1)\", \"comment\": \"Thanks for your point. In response, we have carefully revisited all the relevant experimental results related to Figure 2 and reorganized our discussion to provide a clearer and more thorough analysis.\\n\\n1. **Addressing the Challenges in Figure 2:**\\nWe have conducted a series of quantitative experiments involving VTG downstream tasks (Tables 1, 2, and additional results, such as Answer 5 to **Reviewer yg5S** comparing performance on large-scale datasets). Our experimental findings show that the proposed method achieves competitive performance relative to recent state-of-the-art VTG methods. This suggests that the uncertainty measurement approach itself is meaningful. While the primary focus of this study is on measuring uncertainty in VTG inference, we are pleasantly surprised to observe that our method also leads to improvements in VTG task performance. We believe this enhancement stems from the fact that modeling uncertainty strengthens the model\\u2019s ability to extract important information from the video.\\n2. **Evaluating Uncertainty Measurement Across Modalities:**\\nTo further validate that our uncertainty measurement method can effectively capture uncertainty introduced by the text and image modalities, we conducted adversarial experiments. By introducing varying levels of noise into the text and image modalities, we observed the resulting changes in uncertainty during inference. This setup directly simulates the challenges described in Figure 2. The uncertainty distributions (shown in Figure 6) indicate that our method is highly sensitive to adversarial samples in both modalities, which helps address the modality imbalance issue present in the baseline model.\\n3. **Addressing the Issue of Video Length Distribution:**\\nAs noted, videos of varying lengths are common in real-world applications. Reference [1] discusses how current VTG datasets are affected by the uneven distribution of video lengths, leading to potential bias since models often focus on specific time intervals where most localization occurs. To address this, we visualized the uncertainty inference results on QVHighlight (Figure 8) and conducted ablation studies. The results demonstrate that our method can effectively capture this issue through uncertainty measurement, with the model exhibiting significantly higher uncertainty for less frequent localization results. This further highlights that DDM-VTG exhibits strong capabilities in handling out-of-distribution (OOD) samples and mitigating location bias.\\n \\n In response to the concerns raised by **Reviewer QN6J** and **Reviewer yg5S** regarding the generalization of DDM-VTG, we have added experiments on ActivityNet-CD and Charades-CD as suggested (please refer to Answer 5 to Reviewer QN6J). In these datasets, the IID samples represent data processed to address location bias, while the OOD samples are unprocessed. The results show that our method outperforms the baseline in OOD inference and achieves a smaller **IID-OOD Gap**, indicating that DDM-VTG is robust to location bias.\\n \\n4. **Qualitative Case Studies:**\\nWhile we have presented numerous quantitative results, our primary goal is to demonstrate that the model can make robust inferences even when confronted with clearly anomalous inputs, as outlined in Figure 1. 
To visually demonstrate this ability, we have included a series of case studies (**Figure 11 in Appendix C.3**, and **Figures 13\\u201316 in Appendix C.5**). These cases cover the challenges mentioned in Figure 2(a), (b), and (d). Since existing VTG methods do not explicitly quantify uncertainty to address these challenges, we chose UniVTG as a representative baseline for comparison. The case-by-case comparisons and analyses have been provided in the **rebuttal.zip**. These examples confirm that our model is able to sensibly and accurately express the uncertainty involved in inferences when facing these challenges.\\n \\n However, due to the inherent limitations of the model's knowledge capacity, we observed that the model's inference on extreme samples still lacks fine-grained characterization. For example, similar extreme OOD samples tend to consistently show extreme uncertainty values, such as 0.99 or 1.00. We sincerely hope to further improve this aspect in future work, allowing the model to produce more nuanced and reliable uncertainty estimates, which would lead to more trustworthy VTG inferences.\\n \\n\\n[1] Yuan, Yitian, et al. A closer look at temporal sentence grounding in videos: Dataset and metric.\\u00a0*Proceedings of the 2nd international workshop on human-centric multimedia analysis*. 2021.\"}",
"{\"comment\": \"Thank you for your response, which addresses some of my questions. I have increased my score to 6.\"}",
"{\"title\": \"1. On Whether the Dataset is Open-World (Addressing Weakness 1)\", \"comment\": \"We argue that the datasets used in our work are open-world in nature. The majority of our evaluations were conducted on QVHighlights, which employs free-text annotations for video segments rather than restricting to a finite set of categories. Moreover, QVHighlights encompasses diverse video types, including news and vlogs, with no constraints on objects, environments, actions, or domains. Thus, we consider QVHighlights a quintessential open-world dataset, and we performed extensive and comprehensive experiments on it.\\n\\nAdditionally, our experiments included other datasets, such as TACoS and Charades-STA. While these datasets impose certain domain-specific restrictions (e.g., TACoS focuses on kitchen scenarios and Charades-STA on indoor activities), they still exhibit open-world characteristics in other aspects, such as free-text annotations, unconstrained objects, and actions. Therefore, despite some domain limitations, we believe these datasets maintain an open-world nature.\"}",
"{\"comment\": \"Dear reviewer,\\n\\nWe wonder if our response answers your questions and addresses your concerns? If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!\"}",
"{\"summary\": \"It presents DDM-VTG, a new model that integrates Deep Evidential Regression into Video Temporal Grounding to handle uncertainties in open-world scenarios. It addresses modality imbalance and counterintuitive uncertainty with a Reflective Flipped Fusion block and a Geom-regularizer, enhancing model robustness and effectiveness across benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It introduces the first extension of Deep Evidential Regression (DER) to Video Temporal Grounding (VTG) tasks, aiming to address uncertainties in open-world scenarios.\\n2. It proposes a Debiased DER Model (DDM-VTG) that tackles modality imbalance and counterintuitive uncertainty through a Reflective Flipped Fusion block and a Geom-regularizer, enhancing the model's sensitivity to text queries and calibrating uncertainty estimation.\", \"weaknesses\": \"1. The datasets used are not open-world.\\n2. The performance on the video summarization task is not advantageous enough.\\n3. Figure 2 shows 4 cases of the uncertainty. It is not clear how the method addresses (a)(b)(d) and how to evaluate if the methods can handle these scenarios.\", \"questions\": \"1. Since the datasets have annotations like a matched moment and text, how to evaluate the model's ability to learn uncertainty when processing an unreasonable text query? Like the example in Figure 1\\n2. In Geom-regularization, how to define accurate predictions? how to define less accurate predictions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"2. On the Generalization of DDM-VTG to Other Video-Related Tasks\", \"comment\": \"In Tables 1 and 2, we present the evaluation results of DDM-VTG on three video-related tasks: Moment Retrieval, Highlight Detection, and Video Summarization. Our work primarily focuses on uncertainty estimation for **multi-modal regression tasks**, and we believe that DDM-VTG has potential for extension to other multi-modal regression tasks as well.\"}",
"{\"title\": \"3. On How DDM-VTG Addresses the Scenarios in Figure 2 and Evaluates Its Effectiveness (Addressing Weakness 3)\", \"comment\": [\"Figure 2 illustrates examples corresponding to the two types of uncertainty modeled by our approach: **epistemic uncertainty** (Figures 2a, 2b) and **aleatoric uncertainty** (Figures 2c, 2d). Our baseline framework already incorporates the modeling and measurement of these uncertainties, as detailed in Sections 3 and 4.2. Building on this, DDM-VTG strengthens modality interaction and uncertainty correction, enabling more robust uncertainty estimation in the challenging multimodal context of VTG tasks.\", \"**We provide comprehensive qualitative and quantitative experiments to demonstrate DDM-VTG\\u2019s capability in addressing these scenarios.**\", \"---\", \"### **Qualitative Experiments**\", \"We conduct extensive case studies to illustrate DDM-VTG's ability to estimate uncertainties effectively. These cases are presented in **Figure 11**, **Appendix C.3**, and **Figures 13\\u201316 (Appendix C.5)**. Specifically:\", \"**Figure 2a (OOD Video):**\", \"Case 2 in Figure 11: OOD video with abnormal aspect ratios (lines 1089\\u20131095).\", \"Case 4 in Figure 11: Rare boundary annotations (lines 1103\\u20131109).\", \"Figures 14\\u201315: OOD video domains (lines 1190\\u20131220).\", \"**Figure 2b (Semantic Ambiguity):**\", \"Case 6 in Figure 11: Visual blur caused by a \\\"plastic container\\\" (lines 1117\\u20131122).\", \"Figure 13: Tiny objects causing visual blur (lines 1162\\u20131176).\", \"Figure 16: Abstract textual expressions causing semantic ambiguity (lines 1225\\u20131239).\", \"**Figure 2d (Low-Level Feature Variations):**\", \"Case 3 in Figure 11: Scene transitions (lines 1096\\u20131102).\", \"Case 5 in Figure 11: Lighting condition changes (lines 1110\\u20131116).\", \"---\", \"### **Quantitative Experiments**\"], \"we_simulate_the_scenarios_in_figure_2_using_controlled_perturbations_and_measure_the_resulting_uncertainty_distributions_on_qvhighlights\": [\"**Simulating OOD and Ambiguities:**\", \"Figure 6 shows how uncertainty distributions change when varying levels of Gaussian noise are added to videos (simulating OOD scenarios in Figure 2a, visual semantic ambiguity in Figure 2b, and low-level feature variations in Figure 2d) or replacing different proportions of tokens in text (simulating text semantic ambiguity in Figure 2b).\", \"**These experiments, both qualitative and quantitative, validate DDM-VTG\\u2019s capability to robustly address the diverse uncertainty scenarios outlined in Figure 2.**\"]}",
"{\"title\": \"6. On addressing the challenges shown in Figure 2 (Addressing Question 1)\", \"comment\": \"Existing VTG methods lack any explicit mechanisms to quantify uncertainty. As pioneers in this field, we established a baseline and, for the first time, achieved explicit uncertainty quantification. We further proposed DDM-VTG to debias this uncertainty quantification.\\n\\nQuantitative and qualitative results related to Figure 2 are provided in the main text (as noted in **Answer 3 to Reviewer MSwQ**). In addition, we have included quantitative results on the ActivityNet-CD and Charades-CD datasets to address the challenges outlined in Figure 2(a), as referenced in **Answer 5 to Reviewer QN6J** . Visual comparisons on additional datasets will be included in the supplementary material (**rebuttal.zip**).\"}",
"{\"comment\": \"Dear reviewer,\\n\\nThe discussion stage, which has been extended by the ICLR committee, is ending soon, we wonder if our response answers your questions and addresses your concerns? If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!\"}",
"{\"title\": \"Summary of Revisions and Responses to Reviewer Feedback\", \"comment\": \"We sincerely thank all reviewers for their time and thoughtful feedback, as well as for recognizing the innovation in our work. The constructive comments have been invaluable in improving our paper.\\nIn this rebuttal, we have addressed the reviewers\\u2019 concerns by conducting additional experiments, including comparisons with SOTA methods, ablation studies, and parameter analysis. We have also performed qualitative case studies to provide deeper insights.\\nBeyond the official comments, we have revised Figure 3 based on Reviewer FPPU\\u2019s suggestions, corrected minor typographical errors (marked in blue in the revised PDF), and uploaded the updated version. Additionally, following Reviewer FPPU\\u2019s recommendation, we have included more detailed case studies in the supplementary file \\uff08**rebuttal.zip**).\\nWe welcome any further discussion and look forward to refining the paper further based on your feedback. Thank you again for your constructive and insightful reviews.\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nWe are grateful for your positive update. We sincerely appreciate your constructive feedback and valuable advice!\"}",
"{\"comment\": \"Thank you for your response. My concerns have been addressed. I am willing to raise my rating.\"}",
"{\"title\": \"More Quantitative Results on Figure 2(a)\", \"comment\": \"We provide supplementary quantitative results on the ActivityNet-CD and Charades-CD datasets to address the challenges outlined in Figure 2(a), as referenced in **Comment 5 to Reviewer QN6J**. Kindly feel free to review them.\"}",
"{\"comment\": \"Dear reviewer,\\n\\nWe wonder if our response answers your questions and addresses your concerns? If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!\"}",
"{\"title\": \"3. Comparison with R2-Tuning (Addressing Weakness 3)\", \"comment\": \"R2-Tuning, a concurrent work, was not considered in our initial submission. However, our method demonstrates competitive downstream performance:\\n\\n| Method | QVHighlight R1@0.5 | QVHighlight R1@0.7 | QVHighlight HD mAP | Charades-STA R1@0.5 | Charades-STA R1@0.7 | Charades-STA mIoU | TACoS R1@0.5 | TACoS R1@0.7 | TACoS mIoU |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| R2-Tuning | **68.7** | **52.1** | **40.6** | 59.8 | 37.0 | 50.9 | **38.7** | **25.1** | **35.9** |\\n| DDM-VTG | 65.0 | 49.4 | 40.1 | **60.2** | **38.0** | **51.6** | 37.3 | 19.4 | 33.9 |\"}",
"{\"comment\": \"Dear reviewer MSwQ:\\n\\nWith the discussion stage ending soon, we want to kindly follow up to check if our response has addressed your questions and concerns. If yes, would you kindly consider raising the score before the discussion phase ends\\uff1f We are truly grateful for your time and effort\\uff01\"}",
"{\"title\": \"5. On the Impact of Data Quality and Scale on Method Performance (Regarding Question 2)\", \"comment\": \"We believe that the scale of the training dataset plays a crucial role in expanding the knowledge boundaries of the model, thereby improving the robustness of its uncertainty estimation. To evaluate this, we conducted experiments using 20%, 40%, 60%, 80%, and 100% of the training splits. We report both the VTG task performance and the uncertainty quantification metrics of the model outputs. As shown in the table below, increasing the data scale leads to consistent improvements in both VTG task performance and uncertainty quantification ability.\\n\\nTo assess the impact of data quality, we designed an adversarial experiment where the quality of data was intentionally degraded during inference. The results demonstrate that the model outputs higher uncertainty when encountering low-quality or abnormal data, confirming its ability to quantify uncertainty effectively. Details of this experiment can be found in **Answer 3 to Reviewer MSwQ**, including the cases presented in **Figure 11**, **Appendix C.3**, and **Figures 13\\u201316**.\\n\\n| **Training Set Scale** | **QVHighlights VTG Performance** | | **QVHighlights Uncertainty** | | **Charades-STA VTG Performance** | | **Charades-STA Uncertainty** | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | **mAP-MR** | **Hit1-HL** | **EUCM\\u2193** | **Entropy\\u2191** | **mAP-MR** | | **EUCM\\u2193** | **Entropy\\u2191** |\\n| **20%** | 25.67 | 57.94 | 0.3681 | 0.2649 | 33.02 | | 0.1639 | 0.0987 |\\n| **40%** | 33.60 | 61.42 | 0.2813 | 0.1795 | 37.05 | | 0.1620 | 0.2216 |\\n| **60%** | 37.25 | 63.16 | 0.2805 | 0.2388 | 39.68 | | 0.1444 | 0.3479 |\\n| **80%** | 38.82 | **64.06** | 0.3226 | **0.2983** | 40.15 | | 0.1596 | 0.2919 |\\n| **100%** | **41.15** | 64.00 | **0.2787** | 0.2457 | **41.43** | | **0.1440** | **0.3609** |\\n---\\nAlso, we appreciate the reviewer\\u2019s suggestion and have further supplemented our analysis with additional results on the **ActivityNet-Captions** and **Ego4D-NLQ** datasets. Since several prominent baselines do not report results on these datasets, we included comparisons with additional models. The results indicate:\\n\\n1. Our method demonstrates strong downstream task performance on both large-scale datasets.\\n2. Based on these findings, we plan to further explore larger-scale pretraining to improve both uncertainty estimation and downstream task performance.\\n\\n| **Dataset** | **Method** | **R1@0.3** | **R1@0.5** | **R1@0.7** |\\n| --- | --- | --- | --- | --- |\\n| **ActivityNet-Captions** | VLG-NET [1] | 46.32 | 29.82 | - |\\n| | UnLoc-large [2] | 48.30 | 30.20 | - |\\n| | **Ours** | **71.72** | **56.34** | **33.68** |\\n| **Ego4D-NLQ** | UniVTG [3] | 7.28 | 3.95 | 1.32 |\\n| | EgoVLP [4] | 10.46 | 6.24 | - |\\n| | **Ours** | **11.04** | **8.18** | **5.32** |\\n\\n[1] Soldan, Mattia, et al. VLG-Net: Video-Language Graph Matching Network for Video Grounding. ICCVW'21.\\n\\n[2] Yan, Shen, et al. Unloc: A unified framework for video localization tasks. ICCV'23.\\n\\n[3] Lin, Kevin Qinghong, et al. Univtg: Towards unified video-language temporal grounding. ICCV'23.\\n\\n[4] Lin, Kevin Qinghong, et al. Egocentric video-language pretraining. NeurIPS'22.\"}",
"{\"comment\": \"Dear Reviewer QN6J,\\n\\nThank you for the positive update. We greatly appreciate your time and consideration. If there are any remaining concerns that you would like further clarification on before the discussion stage ends, please feel free to let us know. We would be more than happy to address any further questions.\"}",
"{\"comment\": \"Dear reviewer,\\n\\nThanks again for your positive response. However, we noticed that the score has not yet been updated, so we wanted to kindly check if there is any issue or if additional actions are required on our end to facilitate the update.\"}",
"{\"title\": \"4. On computational cost (Addressing Weakness 4)\", \"comment\": \"Our method introduces minimal additional computational overhead. Specifically:\\n\\n - The inclusion of the RFF block, QR task, and DER with the Geom-Regularizer into the DDM-VTG framework has negligible runtime impact. Empirical measurements on an NVIDIA Tesla V100 GPU show consistency with the baseline model. \\n - From a theoretical perspective, the training process for DDM-VTG does not incur significant computational costs. As shown in Appendix A.2, the NLL loss for DER is expressed as: \\n\\n$$\\n\\\\begin{aligned} \\\\mathcal{L}^{\\\\text{NLL}}_i &= \\\\frac{1}{2} \\\\log \\\\left( \\\\frac{\\\\pi}{\\\\nu} \\\\right) - \\\\alpha \\\\log(\\\\Omega) + \\\\left( \\\\alpha + \\\\frac{1}{2} \\\\right) \\\\log \\\\left( (b_i - \\\\gamma)^2 \\\\nu + \\\\Omega \\\\right) + \\\\log \\\\left( \\\\frac{\\\\Gamma(\\\\alpha)}{\\\\Gamma \\\\left( \\\\alpha + \\\\frac{1}{2} \\\\right)} \\\\right) \\\\end{aligned}\\n$$\\n\\n - Each term in the NLL loss has a temporal complexity of O(1). For a training set of N samples, computing the loss for both left and right boundaries results in a total complexity of O(N). \\n - Similarly, the spatial complexity of the Geom-Regularizer is O(1), as it does not require additional storage. Hence, the overall complexity of our proposed DER + Geom-Regularizer framework is \\\\(O(N)\\\\), consistent with the baseline model.\"}",
"{\"comment\": \"Dear reviewer:\\n\\nWith the discussion stage ending soon, we wanted to kindly follow up to check if our response has addressed your questions and concerns. If yes, would you kindly consider raising the score before the discussion phase ends\\uff1f We are truly grateful for your time and effort\\uff01\"}",
"{\"title\": \"7. On the impact of QR on downstream performance (Addressing Question 2)\", \"comment\": \"The QR task is an auxiliary task designed to enhance text-video modality understanding during training and mitigate the modality imbalance observed in baseline methods when estimating uncertainty. Inspired by \\\"mask-and-reconstruct\\\" approaches like MAE and BERT, the QR task masks parts of the input text and reconstructs it during training. However, during inference, the QR head is not used for text reconstruction. Instead, the model is provided with the full text, so there is no issue of \\\"incorrect reconstructed query\\\" during inference.\\n\\nThe improvements introduced by the QR task to baseline methods, as well as the accompanying downstream performance gains, are presented in Table 3(a) and Figure 6 of the main text. Additionally, we have conducted experiments on the effect of the `mask_ratio` for QR on the QVHighlight dataset, and you can refer to Answer 2 to Reviewer yg5S.\"}",
"{\"title\": \"4. On masking a single noun (Addressing Question 1)\", \"comment\": \"We have experimented with randomly masking arbitrary words, but found this approach to be less efficient than masking only entities. The improvement in performance was slower and, in many cases, negligible. We attribute this to the presence of many words in the text that are irrelevant to the core semantics. **We argue that nouns are the most critical**, primarily because **CLIP features** are more focused on objects. Since CLIP is trained on static images, it emphasizes static visual features and lacks the dynamic visual priors embedded in verbs. Furthermore, in the benchmarks used in this study, successfully identifying the primary nouns is often sufficient for producing high-quality reasoning results. This could explain why masking and reconstructing verbs did not lead to significant improvements.\\n\\nWe believe that reconstructing more types of text, such as verbs (which directly correspond to temporal features), could be beneficial if not constrained by the limitations of the feature extractor. In future work, we plan to explore this hypothesis using more powerful pre-trained feature extractors or large-scale temporal pre-training methods to validate the potential benefits of reconstructing arbitrary words.\"}"
]
} |
1Uem0nAWK0 | Inference time LLM alignment in single and multidomain preference spectrum | [
"sadat shahriar",
"Zheng Qi",
"Nikolaos Pappas",
"Srikanth Doss",
"Kishaloy Halder",
"MONICA SUNKARA",
"Manuel Mager",
"Yassine Benajiba"
] | Aligning Large Language Models (LLM) to address subjectivity and nuanced preference levels requires adequate flexibility and control, which can be a resource-intensive and time-consuming procedure. Existing training-time alignment methods require full re-training when a change is needed and inference-time ones typically require access to the reward model at each inference step. To address these limitations, we introduce an inference-time model alignment method that learns encoded representations of preference dimensions, called Alignment Vectors (AV). These representations are computed by subtracting the base model from the aligned model as in model editing enabling dynamically adjusting the model behavior during inference through simple linear operations. Even though the preference dimensions can span various granularity levels, here we focus on three gradual response levels across three specialized domains: medical, legal, and financial, exemplifying its practical potential. This new alignment paradigm introduces adjustable preference knobs during inference, allowing users to tailor their LLM outputs while reducing the inference cost by half compared to the prompt engineering approach. Additionally, we find that AVs are transferable across different fine-tuning stages of the same model, demonstrating their flexibility. AVs also facilitate multidomain, diverse preference alignment, making the process 12x faster than the retraining approach. | [
"LLM",
"Alignment",
"inference"
] | Reject | https://openreview.net/pdf?id=1Uem0nAWK0 | https://openreview.net/forum?id=1Uem0nAWK0 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vW75zSeIY6",
"ZhWuriA4We",
"XYwViidYpt",
"OL8kHyMgSd",
"Mh3zPi2zot",
"FKVnKsAitC",
"DmzHnlxeEX",
"DQJEWgZPFW",
"Cv6sNqoT9i",
"B5usDLzxjY",
"6iv8jNsept",
"2RoRg7zn2J"
],
"note_type": [
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"decision",
"official_comment"
],
"note_created": [
1730556853578,
1734797089281,
1729712883153,
1732689615280,
1732689789434,
1730606348161,
1730411673039,
1732718564186,
1732689696484,
1739609904399,
1737523859643,
1732679033938
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7739/Reviewer_fkLz"
],
[
"ICLR.cc/2025/Conference/Submission7739/Area_Chair_yxsp"
],
[
"ICLR.cc/2025/Conference/Submission7739/Reviewer_82kZ"
],
[
"ICLR.cc/2025/Conference/Submission7739/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7739/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7739/Reviewer_Hnz3"
],
[
"ICLR.cc/2025/Conference/Submission7739/Reviewer_jyJs"
],
[
"ICLR.cc/2025/Conference/Submission7739/Reviewer_Hnz3"
],
[
"ICLR.cc/2025/Conference/Submission7739/Authors"
],
[
"~sadat_shahriar1"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7739/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper presents a preference alignment approach that only aligns during inference, using encoded representations called Alignment Vectors (AVs). The AVs are learned and tuned for the same model in different tuning stages, which shows good transferability across different domains. The authors also build a diverse domain-specific dataset with responses categorized into three levels. Extensive experiments demonstrate that AVs can help LLMs align to different domains and show promising performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper presents a simple and effective idea to align the preferences of LLMs in inference time. The transferability of this approach across different domains is good.\\n\\n2. The authors have also built a large dataset that contains responses in avoidance, generic responses, and expert opinions.\\n\\n3. The AVs offer flexibility to adjust the level of LLMs in generation by adjusting their weights.\", \"weaknesses\": \"1. The work aims to align LLMs during inference, and I agree that \\\"it requires full re-training when a change is needed.\\\" However, AVs are the subtraction of an aligned model and an unaligned model. Alignment during inference is to the unaligned one, making it return to the aligned model. If I understand correctly, this process still requires training and not fully inference-time alignment.\\n\\n2. Although this inference-time alignment method reduces the training cost, it requires two times inference, i.e., unaligned models and AVs.\\n\\n3. The dataset is built upon prompting Claude to generate different responses at different levels. Although the languages are appropriate to these levels (e.g., experts) and express relevant concepts, such as medical terms, are their content appropriate as well? For example, is a medical case resolved by LLMs, or do these LLMs only create or even hallucinate something to meet the prompts' requirements? The practicality of this alignment method is still awaiting to examine in this regard.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper studies the adjustment of LLM behavior at inference time using alignment vectors (AV), defined as the difference between the weights of a model aligned to a particular dimension. The paper creates a synthetic dataset by querying Claude-3-Sonnet to create personas for three domains. Responses are generated for three levels of engagement. Alignment then occurs via linear adjustment of weights from a previous unalignment model.\", \"strengths\": [\"Synthetic data is released for future reproducibility\", \"The method is simple and well-explained\"], \"weaknesses\": [\"Evaluation of the model is insufficient. Automated validation comes from one GPT-4 model, and ground-truth human evaluation is limited. The paper relies on constructed metrics such as preference accuracy and GPT-4 judged generation accuracy, which make it hard to tell if the results are meaningfully strong.\", \"It is not clear if the problem setup of creating personas and responses is actually appropriate for the stated settings (e.g., medical, legal). More validation metrics would improve confidence in the experiment pipeline. Are these responses correct? Is the generated data faithful to the needs of reality?\", \"My decision is based on the issues with problem setup and evaluation.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers raised consistent concerns about the evaluation, clarity of the writing including definition of metrics like \\\"preference accuracy\\\", and usability/relevance of the problem setup. The paper authors addressed several concerns, but many of the author responses acknowledged the weaknesses and left the solutions to future work. The repeated issues across all reviews related to evaluation and relevance were large factors in my decision.\"}",
"{\"summary\": \"This paper proposes an LLM alignment method at inference time which has not been well studied. On top of preference tuning at inference time, they propose a model editing technique called alignment vector arithmetic (subtracting base model with aligned model at inference time) strengthening the methods sections of this paper. It appears there method on inference time alignment performs quite strongly in the three domains under three different instruction types (avoid, generic, and expert). From these three expert instruction type appears to do the best overall. Performance metrics were measured but were observed with some level of hesitancy and there were not many inference time alignment approaches making it difficult to assess. Authors can potentially show the benefits of inference time alignment versus that during training to further motivate the problem.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper is well motivated. It is true that there has been limited study on aligning LLMs at inference time.\", \"The paper presents two clear research questions that they will address.\", \"results show nearly maximal performance.\"], \"weaknesses\": [\"The selection of LLM is not well motivated? Why did you use Claude-3-Sonnet over GPT4 or even open source models like Llama-2/3?\", \"minor attention to detail but keep writing conistent. I see instances were \\\\citep was used and where \\\\cite was used.\", \"Not sure I gree with the multidomain preference approach. Seems that instead of building a generalist AI, experts in the field would prefer a specialized version of the LLM. However I will listen to the authors justification in the rebuttal period.\", \"please formalize a mathematical definition of the preference accuracy.\", \"the task is not super clear. Figure 2 looks amazing but I'm not sure what was done to achieve this.\", \"Writing clarity can be improved. They talk about using Claude then in the section 5.3 they say they use mistral 7b. LLM selection is also not properly motivated.\", \"Paper can motivate the need for inference time alignment over conventional approaches.\"], \"questions\": [\"How do you check what a valid persona-query pair is? How were 13k, 12.3k, 12.8k selected? Is it based on non-repetitive persona-query samples alone or was there for Quality control involved? (section 3.1)\", \"Were the annotators human or was it machine annotations? (section 3.3)\", \"How can you be certain the LLM generation can serve as a ground truth?\", \"Is it better to have an LLM that is aligned to one domain instead of all three domains (equation 3)? I imagine an expert in the field would feel indifferent if the specialized LLM for healthcare was also aligned with law, etc.?\", \"Are there other metrics to measure outside of preference accuracy? I think the benchmark otherwise is not robust enough given preference accuracy is a hand crafted metric from the authors.\", \"How are metrics like safety and helpfulness quanitfied. It was not written clearly?\"], \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"It appears this research has some level of human involvement participating in annotating data.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We would like to start by expressing our gratitude to the reviewer for their valuable feedback. Here is how we have addressed and reflected upon the comments:\\n\\n1. Rame et al.\\u2019s [1] work is closely related to our multi-domain preference alignment. However, their approach focuses on training-time alignment by interpolating weights from models fine-tuned on diverse rewards to achieve Pareto-optimality. In contrast, our work introduces a preference adjustment strategy that operates at inference time, in addition to achieving multi-dimensional alignment. Similarly, while Jang et al. [2] address personalized preference alignment and post-hoc merging, our approach provides a unique capability: preference level adjustment. This feature offers greater granularity and control, enabling dynamic and fine-grained customization of model behavior. We\\u2019d like to thank you for suggesting these papers, and we also included these details in our \\u201crelated work\\u201d section. \\n\\n2. We appreciate the concern regarding the theoretical motivation. While some previous works explored the weight interpolation, our research focuses on the control and tunability aspect of it by a rather \\u201cincremental\\u201d interpolation. Also, this area was unexplored for LLM alignment objectives.\\n\\n3. Next, we added the details of dataset creation, especially how the reported numbers were achieved, in appendix B. Basically, we generated queries across multiple domains and rigorously filtered the dataset by removing truncated, non-English, and incomplete entries while ensuring linguistic consistency and uniqueness. \\n\\n4. We also appreciate the paper restructure suggestion, and we moved the Methodology section before the \\u201cSynthesizing Specialized Preference Data\\u201d.\\n\\n5. Finally, we agree that using the term \\u201cpreference accuracy\\u201d can be ambiguous, and therefore, we added the mathematical description of this metric in Section 5.1\"}",
"{\"comment\": \"We\\u2019d like to express our gratitude for the kind words and feedbacks. Here\\u2019s how we address the comments:\\n\\n1. We agree that GPT-4 judged metrics can be stronger with human study. However, in a small-scale evaluation, we found that humans have an ~87% agreement with claude-based proficiency level response generation, and thus, a high preference accuracy score can also mean highly in agreement with human judgment, as preference accuracy measures the alignment level to preferred responses.\\n\\n2. We also agree with the reviewer feedback that we should use more LLMs to validate our claims, and we plan to do it in the next step (also outlined in section 7). \\n\\n3. We also want to mention that most test-time alignments are prompt-based, be it a prompt engineering technique or including special symbols/signs in the prompt. However, our research objective is to tune the preference level so that the granularity in the preferenc level could be achieved. We did not find the other existing alignment methods are directly comparable to our approach. This issue is also briefly discussed in section 2. \\n\\n**More details on \\u201cover-generalization\\u201d effect:**\\n\\nAligning LLMs with multiple training objectives can often be counterproductive, since these objectives can be orthogonal in many cases, due to complex human preference nature. We have seen safety alignment reducing helpfulness [1] and reasoning capabilities [2], indicating the aligned behavior being widespread across all preference dimensions. In our case, aligning a model for responding to medical queries with expert opinion also induces expertise for other domains like finance or legal. This indicates instead of picking up the \\u201cdomain-specific-proficiency-behavior\\u201d, the model picks up the \\u201coverall-proficiecncy-behavior\\u201d, which makes it difficult to do multi-domain alignment. \\n\\n[1]Tuan, Y. L., Chen, X., Smith, E. M., Martin, L., Batra, S., Celikyilmaz, A., ... & Bikel, D. M. (2024). Towards Safety and Helpfulness Balanced Responses via Controllable Large Language Models. arXiv preprint arXiv:2404.01295.\\n\\n[2]Alami, R., Almansoori, A. K., Alzubaidi, A., Seddik, M. E. A., Farooq, M., & Hacid, H. (2024). Alignment with preference optimization is all you need for llm safety. arXiv preprint arXiv:2409.07772.\"}",
"{\"summary\": \"This paper introduces a novel approach for adjusting Large Language Model (LLM) behaviors during inference time using Alignment Vectors (AV). The key innovation is treating alignment as a model editing problem where preference dimensions are encoded as vectors that can be dynamically combined with the base model through simple linear operations. The authors focus on three proficiency levels (expert, generic, and avoidance) across three specialized domains (medical, legal, and financial), demonstrating how their method enables flexible control over model outputs without requiring retraining. The work includes creation of a synthetic dataset with 38k query-response pairs and shows that their approach reduces resource usage by 12x compared to traditional retraining methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. A novel inference-time model editing technique using Alignment Vectors that allows dynamic adjustment of LLM outputs along preference dimensions without retraining or complex prompt engineering\\n2. A substantial synthetic dataset (38k examples) spanning three domains and three proficiency levels, with human-evaluated quality checks showing strong inter-annotator agreement\\n3. Demonstration that AVs can be effectively transferred across different fine-tuning stages of the same model while maintaining performance\\n4. A resource-efficient approach to achieving multidomain diverse behaviors that is 12x faster than traditional retraining methods\", \"weaknesses\": \"1. The evaluation based on GPT-4 judged metrics might need further validation with human study.\\n2. Validation is limited to only one model (Mistral-7b) - broader testing across different open-source LLMs would strengthen the findings.\\n3. Besides prompting, any test-time adaptation methods should be compare in the main experiments?\\n4. Any further illustrations on \\\"over-generalization effect\\\"?\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work proposes an approach to inference-time control over the alignment of a large language model to multiple, potentially competing preference dimensions. The approach defines an \\u201calignment vector\\u201d which is the difference between the weights of a model aligned to a particular dimension (e.g., using DPO or RLHF). The approach allows for smooth interpolation between the base model and the aligned model, on for any given dimension, as well as for choosing an operating point in a trade-off space between multiple dimensions. In this work, they investigate dimensions along the axes of specialized domains (Medical, Financial, and Legal) and subject matter proficiency. This is implemented by constructing 12,000-13,000 personas related to each of the specialized domains, generating LLM outputs with a prompt that emphasizes each proficiency level (avoidance, generic response, and expert response). They observe that the likelihood of the expert responses tend to increase as the mixture weights are tuned away from the base model towards that of the aligned model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The simplicity of the approach is a major strength, in that inference-time alignment significantly reduces computational costs in cases where it is of interest to align to many potential reward mixtures over single or multiple preference dimensions.\", \"The work also includes a dataset contribution of the generated personas, which has potential for reuse in future work.\"], \"weaknesses\": [\"Unfortunately, this work may not be sufficiently novel nor sufficiently well-grounded in the related literature. I believe that the approach proposed in the present work is essentially a special case of the \\u201cRewarded Soups\\u201d and \\u201cPersonalized Soups\\u201d approaches proposed by Rame et al [1] and Jang et al [2]. In those prior works, they similarly propose inference-time weighted mixtures over models aligned to different reward functions. They also conduct much more extensive experiments and provide more rigorous theoretical motivation for the approach.\", \"The theoretical motivation is relatively superficial compared to related prior work (i.e., works that connect weight interpolation to linear mode connectivity).\", \"Few details are provided regarding the methodology for creating the persona dataset. For example, no details are provided about the \\u201cthorough clean-up, involving truncation, and reformatting\\u201d (Line 159).\", \"1. Rame, Alexandre, et al. \\\"Rewarded soups: towards pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards.\\\" Advances in Neural Information Processing Systems 36 (2023).\", \"2. Jang, Joel, et al. \\\"Personalized soups: Personalized large language model alignment via post-hoc parameter merging.\\\" arXiv preprint arXiv:2310.11564 (2023).\"], \"questions\": [\"Can you please review the concerns regarding novelty and clarify the contribution of the work in that context?\", \"As a suggestion, the paper structure could be improved for readability. I would recommend moving the \\u201cMethodology\\u201d section to be before the \\u201cSynthesizing Specialized Preference Data\\u201d. The \\u201cMethodology\\u201d section is the core contribution and it makes sense to center it. 
The \\u201cSynthesizing\\u201d section could also be combined more directly with the Experiments section, so that all relevant details concerning the experiments are presented together.\", \"As a suggestion, I think it would be better to not refer to the \\u201cpreference accuracy\\u201d and \\u201cGPT-4 judged generation accuracy\\u201d as accuracy metrics. This is because there is no comparison to a ground truth and thus it is not accurate to refer to these metrics as accuracy metrics. \\u201cLikelihood preference rate\\u201d and \\u201cGPT-4 judged rate\\u201d may be more appropriate names. In my opinion, calling the rates that are reported \\u201caccuracy\\u201d also lends itself to misleading claims regarding the performance of the approach (e.g., reading the reported 100% accuracy numbers as perfect performance, when it is more appropriate to think of them at the rate that a particular class of text was preferred).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your reply. I acknowledge that I have carefully reviewed the response and I would like to keep my positive score.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their valuable feedback and address the comments as follows:\\n\\n1. First, we agree that our method of inference time alignment still requires training the model to align. And you rightly pointed out, when AVs are added to the unaligned model, it returns to the fully aligned model. However, our objective is to achieve the preference alignment tunability in the inference time, which otherwise would require separate training objectives. We mainly replaced the expensive process of retraining with AV-based inference time model editing. \\n\\n2. The model editing process is done in O(n) times, being much faster than any training, and therefore, inference time also does not increase. Also, model editing is done before inference, and we only need to do it once instead of every inference time. Furthermore, in a multi-domain setting, training time alignment could have m^n training combinations (m=levels, default is 3), whereas inference time alignment only requires n times training, and provides finer-grained, continuous levels.\\n\\n3. Finally, we agree that we cannot be sure about the generated content in specialized domains being accurate or not. However, we found claude models being used to evaluate responses to medical questions, and it showed superior performance on National Board of Medical Examiners Sample Questions over GPT3.5 [1,2]. However, we will provide an in-depth LLM-generated response correctness evaluation in our future work (this part is added to our limitation section as well). (Please note, in Appendix E, we included some sample generations for the readers) \\n\\n[1] Abbas, A., Rehman, M. S., & Rehman, S. S. (2024). Comparing the Performance of Popular Large Language Models on the National Board of Medical Examiners Sample Questions. Cureus, 16(3).\\n\\n[2] Hosseini, P., Sin, J. M., Ren, B., Thomas, B. G., Nouri, E., Farahanchi, A., & Hassanpour, S. (2024). A Benchmark for Long-Form Medical Question Answering. arXiv preprint arXiv:2411.09834.\"}",
"{\"comment\": \"Thank you. We will address these concerns in the next cycle.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We begin by thanking the reviewer for their feedback. Here\\u2019s how we reflected on the feedback:\\n\\n1. First, we want to clarify that, we use both GPT-4 and Claude 3 sonnet for different purposes: Claude for data generation, and GPT-4 for evaluation. We also added a clarification why claude-3-sonnet was used over GPT-4 model for synthetic data generation in section 4.1. We did this because claude performs comparably to GPT-4, often ranking among the top foundational models. Additionally, using GPT-4 as an independent evaluator helped avoid the known bias of models favoring their own outputs during evaluations. However, we agree that we should perform the alignment experiment with more open-source LLMs like Llama and Qwen, which we plan to do in the future (added to the conclusion). \\n\\n2. We corrected the citation inconsistencies and made all citations as \\u201c\\\\citep\\u201d. \\n\\n3. We agree that experts might want to have an expert model on one specific domain area, and our single-domain preference tuning idea exactly resonates with that application. However, as we explained in the third paragraph of the introduction, several business needs may require multidomain preference objective. This is specifically true when it comes to organizations operating in overlapping domains, like, balancing an insurance company's need for expert legal responses, generic financial answers, and avoiding medical responses. Handling all of them can be challenging, and deploying separate models could be resource-intensive. Joint training with targeted preferences offers a solution but that is inflexible and requires massive training effort.\\n \\n4. As per your suggestion, we added a detailed mathematical formulation for Preference Accuracy in section 5.1. \\n\\n5. We updated the description in Figure 2 . \\n\\n6. We used Claude-3-Sonnet only for synthetic data generation, GPT-4 for evaluating the generations. However, these models are not open-sourced, and therefore, we couldnot use them to experiment with our objective. Therefore, we used Mistral-7b for model editing and alignment, as an open source candidate model. \\n\\n7. We also appreciate your suggestion on why we need inference time alignment over conventional approaches, and we added a box in the introduction section to highlight the necessity of inference time alignment. Basically, in contrast to conventional training time approaches, inference time alignment provides flexibility and adaptability by enabling dynamic adjustments to model behavior based on task or user needs without retraining.\", \"answers_to_your_questions\": \"1. We checked the valid persona-query pair by manually investigating 50 samples. We estimate that at least 93% of all the samples are valid with 95% confidence (computed based on wilson score interval).\\nWe found Claude-3-sonnet reliably follows the instruction provided in our instruction prompt. To choose the number, we first recursively generated 15k queries for all domains, as explained in section 4.1. Next, we found 1-3% of queries and responses being cut short due to timeout and quota limit issues, which were removed. We also found a few queries where language was not English, and those were removed as well. Furthermore, we only considered complete sets of persona-query-three proficiency level responses, and we had to discard a few samples for that as well. Finally, we looked for repetitions of queries, and while some personas were very close, we didn\\u2019t find queries to be repetitive. 
After all of this processing, we ended up with 13,000 personas for the medical domain, 12,374 personas for the financial domain, and 12,867 personas for the legal domain. We included this part in Appendix B. \\n\\n\\n2. Only human annotators were used to evaluate the LLM generation quality.\\n\\n3. This is an important question, and we also think this term can be misleading. In our paper, we used the term \\u201cground truth\\u201d only once, in Section 4.2, to illustrate how we compute LLM and human agreement accuracy. However, we removed it and rephrased the sentence as \\u201cWe also calculate the average annotation agreement by each annotator with the LLM generation.\\u201d\\n\\n4. A multidomain preference approach addresses the need for organizations operating in overlapping domains, such as balancing expert legal, financial, and medical responses, where deploying separate models is resource-intensive. Joint training with targeted preferences is inflexible and requires significant effort, making multidomain tuning a practical solution.\\n\\n5. We found that preference accuracy best describes the alignment quality. It measures, out of N total queries, how many times the LLM rewards (i.e., prefers) the preferred answer. We also added the mathematical description of this metric in the answer to Q4. However, if the reviewers suggest alternatives, we will be happy to explore other metrics. \\n\\n6. We used the same metric, \\u201cpreference accuracy\\u201d, for safety and helpfulness.\"}"
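The "at least 93% valid with 95% confidence" estimate from 50 inspected samples is consistent with a standard Wilson score lower bound. The sketch below assumes all 50 checked samples were valid, since the exact pass count is not stated in the response.

```python
# Wilson score lower bound for a binomial proportion. With 50/50 valid samples
# (an assumption; the response does not state the exact pass count), the 95%
# lower bound is ~0.929, consistent with the "at least 93% valid" claim above.
from math import sqrt

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    margin = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

print(round(wilson_lower_bound(50, 50), 3))  # ~0.929
```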
]
} |
1UMxtR9Eb9 | Unifying Disentangled Representation Learning with Compositional Bias | [
"Whie Jung",
"Dong Hoon Lee",
"Seunghoon Hong"
] | Existing disentangled representation learning methods rely on inductive biases tailored for the specific factors of variation (e.g., attributes or objects).
However, these biases are incompatible with other classes of factors, limiting their applicability for disentangling general factors of variation.
In this paper, we propose a unified framework for disentangled representation learning, accommodating both attribute and object disentanglement.
To this end, we reformulate disentangled representation learning as maximizing the compositionality of the latents.
Specifically, we randomly mix two latent representations from distinct images and maximize the likelihood of the resulting composite image.
Under this general framework, we demonstrate that adjusting the strategy for mixing between two latent representations allows us to capture either attributes or objects within a single framework.
To derive appropriate mixing strategies, we analyze the compositional structures of both attributes and objects, then incorporate these structures into their respective mixing strategies.
Our evaluations show that our method surpasses or is comparable to state-of-the-art baselines such as DisDiff in attribute disentanglement (DCI, FactorVAE scores), and LSD and L2C in object property prediction tasks for object disentanglement. | [
"Unsupervised Representation Learning",
"Disentangled Representation Learning",
"Compositionality"
] | Reject | https://openreview.net/pdf?id=1UMxtR9Eb9 | https://openreview.net/forum?id=1UMxtR9Eb9 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ynecIBRRLQ",
"yIn5UYtFpc",
"xCJfcFeCCT",
"uS4od6xkt3",
"oln2KD8J7t",
"lo4L06WWnX",
"gBHlGgpP60",
"YR6wPjXa0h",
"Y3IBJdHria",
"XqPKldzduy",
"WklAKVz1tJ",
"VssHa35GZ9",
"VBBmMp6dhM",
"V8EyBTo1z9",
"UJcwvJBD9L",
"TF9xKLSrUn",
"QZxm9Lb9yI",
"NS4RLIacIK",
"HfFKedL3kL",
"FtbuHNVcNr",
"EceaparEtr",
"8YOio868mg",
"80tSpPXBps",
"5Xu0w87TRl",
"44GemwSiYf"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1733216905998,
1730488834892,
1732792942231,
1730685428020,
1733140202641,
1732794286988,
1731016224804,
1732795134643,
1730107837713,
1733009061931,
1732793750909,
1733150923659,
1733216694764,
1737523780355,
1732795180293,
1733317897002,
1732794318243,
1732794209476,
1732810072391,
1730506192286,
1732793286427,
1732793680611,
1732795271052,
1732795101121,
1732795285191
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_iDAj"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_XJvB"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_c3eL"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_bFYK"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_c3eL"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_iDAj"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Area_Chair_x6wV"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_bFYK"
],
[
"ICLR.cc/2025/Conference/Submission6613/Reviewer_cwzS"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6613/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Response to post rebuttal comment from Reviewer c3eL\", \"comment\": \"Thank you very much for your reply. We deeply appreciate your support and thoughtful feedback, which certainly strengthened our work.\"}",
"{\"summary\": \"The paper presents a framework for disentangled representation learning that targets both attribute\\u2014and object-based disentanglement within a single model. The authors formulate disentangled representation learning as maximizing the compositionality of randomly mixed latent representations of distinct images. The method uses a pre-trained diffusion model as an image generator and introduces an additional compositional consistency loss to encourage the composite images to remain faithful to the composite latent. The authors claim that their method can obtain superior performance in standard disentanglement benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Strengths:**\", \"The paper is relatively clear and easy to understand;\", \"The general idea of enforcing compositional consistency across mixed latent representations is fairly neat, and could possibly be extended to more challenging scenarios;\", \"The results seem to match or exceed some of the previous works on disentanglement benchmarks.\"], \"weaknesses\": \"**Weaknesses:**\\n\\n- The approach relies on a pre-trained diffusion model to ensure composite image realism, but this doesn\\u2019t guarantee alignment with the intended attribute or object combinations. As such, it is my understanding that this can compromise the interpretability and control of compositions in the general case, especially in more complex scenarios with subtle and/or hierarchical attribute/object relationships. \\n- There are no guarantees that the latent representations are identifiable under the current model, and by implication, neither are the compositions;\\n- The fixed mixing strategies, although appropriate for the simple cases studied, are quite rigid and likely would not adapt well to more complex scenarios in real data;\\n- The scope of the evaluation is limited to toy settings which is somewhat outdated given the recent progress in generative modelling.\\n- The writing is a little careless at times, there are numerous typos and/or grammatical issues some of which are mentioned below.\\n\\nIn my opinion, in its current state, this work largely sidesteps the key challenges in the area today, particularly the theoretical analysis of identifiability for latent representations and the development of scalable techniques that allow object-centric methods to be applied effectively in real-world settings. 
Therefore, I would encourage the authors to bolster their current contribution by tackling one of the two aforementioned challenges in the future.\\n\\n**Typo corrections:**\\n\\nline 34 \\\"theoretically prove\\\" \\\\\\nline 46 \\\"a unique object\\\" \\\\\\nline 70 \\\"and verify\\\" \\\\\\nsection 2 heading change to \\\"Background\\\" \\\\\\nline 77 \\\"incompatible with\\\" \\\\\\nline 97 \\\"that render\\\" \\\\\\nline 107 \\\"tailored specifically\\\" \\\\\\nline 122 \\\"maximizing the likelihood\\\" \\\\\\nline 122 \\\"disentangle attributes and objects\\\" \\\\\\nline 147 \\\"to the type of\\\" \\\\\\nline 163 \\\"While (Jung et al., 2024) rely\\\" \\\\\\nline 165 sentence needs rewriting for clarity \\\\\\nline 167 \\\"derive a specific\\\" \\\\\\nline 177 \\\"of each factor\\\" \\\\\\nline 177 \\\"derive a corresponding\\\" \\\\\\nline 188 \\\"independent sampling of\\\" \\\\\\nline 190 \\\"is equivalent\\\" \\\\\\nline 197 \\\"always contains\\\" \\\\\\nparagraph starting at line 206 could do with rewriting for clarity \\\\\\nline 216 \\\"belong to the same\\\" \\\\\\nline 259 \\\"While Jung et al. (2024) also maximize...\\\" \\\\\\nline 295 \\\"to each factor of\\\" \\\\\\nline 307 \\\"ensure reliable image generation\\\" \\\\\\nline 310 \\\"from scratch\\\" \\\\\\npage 6 footnote \\\"significantly\\\" \\\\\\n\\netc\", \"questions\": [\"What challenges do the authors anticipate in applying this model to real-world, complex datasets, and how might they address these?\", \"Could dynamic/learned mixing strategies replace fixed ones to improve flexibility in complex scenes?\", \"Have the authors thought about under which conditions their method can provide identifiability guarantees?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Response to Reviewer c3eL\", \"comment\": \"We thank the reviewer for valuable comments and suggestions. We have revised the paper to correct the typos and provide clarifications. Below, we respond to each of the individual questions.\\n> **Q1.** Complexity of the proposed approach leads to limited applicability and impact: the proposed approach requires the use of pretrained diffusion models to operate and requires access to composite images to train the model.\\n\\n**A1.** We appreciate the reviewer\\u2019s comment and acknowledge the point about the need for pretrained diffusion models.\\nFortunately, with recent advancements in diffusion models, large-scale, off-the-shelf, pretrained models (e.g., Stable Diffusion) are now readily accessible.\\nBy leveraging these models, we can avoid the additional computational burden of training diffusion models from scratch when working with real-world datasets. Furthermore, generating composite images does not introduce additional learnable parameters, as we reuse the diffusion decoder trained with an auto-encoding loss. Therefore, while our method introduces some computational costs for generating and validating composite images, we do not need additional learnable components, preserving the overall applicability.\\n> **Q2.** Limited performance increase in single-seed object disentanglement experiment.\\n\\n**A2.** We appreciate the valuable comment.\\nTo examine the robustness of our method in object disentanglement, we trained our model using three different random seeds and report the standard deviations in Table 19 in Appendix A.9. Due to the limited time budget during the rebuttal period, we conducted this analysis only for our method but we will include standard deviations for all baseline methods as well in the final version of the paper to provide a comprehensive comparison.\\nRegarding the limited performance increase in object disentanglement experiment, we would like to emphasize that the primary goal of our work is **not to achieve state-of-the-art performance in both attribute and object disentanglement**, but **to propose a unified framework capable of disentangling both attributes and objects.** While our method shows comparable performances to baselines in object disentanglement experiments, it is the only approach among the competitors that can successfully disentangles both attributes and objects within a single framework. We believe this novel capability of our method brings a meaningful contribution to the field.\\n\\n> **Q3.** Can authors elaborate on why the maximum likelihood is needed despite already enforcing low reconstruction error?\\n\\n**A3.** We appreciate the reviewer\\u2019s comment. Minimizing the reconstruction error ensures that each latent representation is informative for a given image, but it does not inherently guarantee compositionality or realism in the generated composite images. To address this, we need an additional mechanism to encourage composite images to be realistic. Since ground-truth images for composite images are not available, we employ a pre-trained diffusion model to estimate the likelihood of composite images. By maximizing the likelihood, the model is guided to produce realistic composite images, thereby facilitating the learning of meaningful compositional representations.\"}",
"{\"summary\": \"The paper attempts to tackle attribute and object disentanglement through the same mechanism as opposed to separate treatment by prior methods. Building on diffusion based decoding approaches that maximize compositionality, this paper lays emphasis on composing/mixing strategy of latents for object/attributes.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Addresses both attribute and object disentanglement by developing appropriate mixing strategy for latents. This is helpful to steer the field towards disentangling different types of factors of variation - eg properties of object and object themselves.\\n2. The paper gives an in depth analysis of the intricacies involved in optimizing for compositionality.\\n3. The paper is well written for the most part. There are appropriate visualizations in method and experiments that complement the text.\", \"weaknesses\": \"The impact of paper can be more by showing results on real world data\", \"questions\": \"Are there any further insights on the failure cases? Is it harder to compose attributes or objects?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Response to post rebuttal comment from Reviewer bFYK\", \"comment\": \"Thank you very much for your reply and for all of your detailed comments, which certainly strengthened our work.\\nWe are happy to hear that we have addressed most of your concerns. \\nIf you have any further questions or concerns, please do not hesitate to let us know. \\nAlthough the discussion period is nearing its end, we will do our best to address any remaining issues.\"}",
"{\"title\": \"Official Response to Reviewer iDAj (2/3)\", \"comment\": \"> **Q4.** The scope of the evaluation is limited to toy settings which is somewhat outdated given the recent progress in generative modelling.\\n\\n**A4**. We appreciate the valuable comments. Following the reviewer\\u2019s suggestion, we conducted additional experiments on CelebA-HQ for attribute disentanglement and MultiShapeNet (MSN) for object disentanglement, respectively. Experimental details and results are included in Appendix A.8. \\n\\nFor the CelebA-HQ dataset, we use the attribute-mixing strategy to disentangle attribute factors. To verify the disentanglement of the learned representations, we swap each latent vector one by one between two images and present the resulting composition images in Figure 6 and 7. In the third column of each figure, we observe that while the source images lack bangs, the swapped images successfully generate bangs while preserving other attributes. Similarly, in the fourth and fifth columns, the facial expressions (e.g., smile) and skin tones of the target images are effectively transferred to the source images. These qualitative results demonstrate that our attribute-mixing strategy is capable of disentangling attribute factors, even in complex datasets like CelebA-HQ.\\n\\nWe also validate our method on MSN dataset with object-wise manipulation and unsupervised segmentation. For the object-wise manipulation task, we encode pairs of images into $N=5$ object representations and exchange random object latents between the pairs to construct composite images. As shown in Figure 8, our method successfully performed object-level insertion and removal, demonstrating that each latent representation distinctly captures individual objects. This confirms that our approach effectively disentangles object representations within the latent space.\\n\\nFor the unsupervised segmentation task, we measure FG-ARI, mIoU, mBO on object masks following common practices in object-centric literature. As our method does not have a built-in mechanism to directly express group memberships between pixels, we additionally train Spatial Broadcast Decoder on the frozen latent representations to predict explicit object masks for each latent representation (please refer to A2 and appendix A.8 for details). The results are reported in Table 17 in Appendix A.8.\\nAmong the competitive slot-attention based baselines, our method achieves second-best performances across all of three metrics. The high segmentation scores of L2C are mainly due to its slot-attention-based regularization term (see Equation 8 in the L2C paper), which explicitly encourages the slot masks to align with object shapes. Excluding L2C, our method outperforms rests of the baselines (LSD, SLATE) across all metrics, despite not employing a spatial clustering mechanism like slot attention. These results demonstrate the effectiveness of our framework in disentangling object representations in a complex dataset.\\n\\n> **Q5.** What challenges do the authors anticipate in applying this model to real-world, complex datasets, and how might they address these?\\n\\n**A5**. One of the primary challenges in applying our model to real-world, complex datasets is the need for a diffusion model that can reliably estimate the likelihood of complex composite images. Fortunately, this challenge can be addressed by leveraging off-the-shelf diffusion models trained on large-scale datasets (e.g., Stable Diffusion) or fine-tuning them on the target dataset. 
Another practical challenge is that real-world datasets often consist of highly diverse and complex images, which may result in slower convergence if a randomly initialized encoder is used. To address this, employing a pretrained encoder (e.g., DINO) as commonly done in recent object-centric approaches, would likely improve training efficiency and representation quality.\\nLastly, our current mixing strategy is designed to disentangle attributes or objects separately, but it does not yet support discovering factors of variation with more complex compositional structures, such as jointly disentangling attributes and objects or handling hierarchical relationships. To address such cases, proper mixing strategies should be investigated to capture such intricate compositional structures. For example, to discover factors of variation with hierarchical structures, the mixing strategy could be adapted to enforce hierarchy-specific constraints, such as allowing exchanges only between nodes at the same level in the hierarchy. Exploring and embedding diverse compositional structures through advanced mixing strategies is an essential direction for future research, and we will investigate this in our future work.\"}",
"{\"summary\": \"This paper investigates the learning of disentangled representations in particular the adaptation of existing frameworks to the learning of representations that can disentangle both attributes (e.g., color, texture, ...) and objects in a scene which authors claim prior work only tacked one of the other. The authors propose to leverage compositionality to learn disentangled representations. The setup includes pre-trained VAEs which provide representations that are then combined. The new representations serve as input to a diffusion-based decoder which is trained to reconstruct the composition of the original images. A pre-trained diffusion model is also used to enforce consistency between the input composite representations and the representation of the generated image. The method is tested for feature and object disentanglement on multiple synthetic datasets where is shows either superior or comparable performance to attribute or object disentanglement methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"presentation: the paper is polished, clear, and well-written\", \"relevance of the topics: learning models that disentangle sources of information whether attributes or objects without any prior knowledge about the type of sources but rather that rely on general prior information about the data structure like compositionality to enforce disentanglement is of great nterest to the community.\"], \"weaknesses\": [\"complexity of the proposed approach leads to limited applicability and impact: the proposed approach requires the use of pretrained diffusion models to operate (i.e., to maximize the likelihood of composite images) and requires access to composite images to train the model.\", \"limited performance increase: while results show more consistent improvements for the **multi-seed** attribute disentanglement experiments, the gains are less consistent across metrics for the **single-seed** object disentanglement experiment.\"], \"minor\": [\"theta should be a subscript in line 187\", \"typo line 212, 281, 310\", \"error in figure 1: z3 should be blue instead of orange\", \"line 227: figure 1 above\", \"Not sure I am getting lines 210-213\"], \"questions\": [\"can authors elaborate on why the maximum likelihood is needed despite already enforcing low reconstruction error ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Response to Reviewer bFYK (2/5)\", \"comment\": \"> **Q3.** Could you provide results on at least a couple of the more complex datasets?\\n\\n**A3**. We appreciate the valuable comments. \\nFollowing the reviewer\\u2019s suggestion, we conducted additional experiments on CelebA-HQ for attribute disentanglement and MultiShapeNet for object disentanglement, respectively. Due to limited time in the rebuttal period, we conduct experiments on CelebA-HQ instead of CelebA, as we can employ a pretrained, off-the-shelf diffusion model ([3]) for the CelebA-HQ dataset. Experimental details and results are included in Appendix A.8.\\n\\nFor the CelebA-HQ dataset, we use the attribute-mixing strategy to disentangle attribute factors. To verify the disentanglement of the learned representations, we swap each latent vector one by one between two images and present the resulting composition images in Figure 6 and 7. In the third column of each figure, we observe that while the source images lack bangs, the swapped images successfully generate bangs while preserving other attributes. Similarly, in the fourth and fifth columns, the facial expressions (e.g., smile) and skin tones of the target images are effectively transferred to the source images. These qualitative results demonstrate that our attribute-mixing strategy is capable of disentangling attribute factors, even in complex datasets like CelebA-HQ.\\n\\nWe also validate our method on MSN dataset with object-wise manipulation and unsupervised segmentation. For the object-wise manipulation task, we encode pairs of images into $N=5$ object representations and exchange random object latents between the pairs to construct composite images. As shown in Figure 8, our method successfully performed object-level insertion and removal, demonstrating that each latent representation distinctly captures individual objects. This confirms that our approach effectively disentangles object representations within the latent space.\\n\\nFor the unsupervised segmentation task, we measure FG-ARI, mIoU, mBO on object masks following common practices in object-centric literature. As our method does not have a built-in mechanism to directly express group memberships between pixels, we additionally train Spatial Broadcast Decoder on the frozen latent representations to predict explicit object masks for each latent representation (please refer to A2 and appendix A.8 for details). The results are reported in Table 17 in Appendix A.8. \\nAmong the competitive slot-attention based baselines, our method achieves second-best performances across all of three metrics. The high segmentation scores of L2C are mainly due to its slot-attention-based regularization term (see Equation 8 in the L2C paper), which explicitly encourages the slot masks to align with object shapes. Excluding L2C, our method outperforms rests of the baselines (LSD, SLATE) across all metrics, despite not employing a spatial clustering mechanism like slot attention. These results demonstrate the effectiveness of our framework in disentangling object representations in a complex dataset.\"}",
"{\"summary\": \"This work proposes a framework to learn disentangled representations of either attributes (e.g., an object's color or orientation) or distinct objects within a scene. The frameworks begins by encoding a pair of images using a VAE encoder. The embeddings generated are $k$ vectors that eventually will be the disentangled representations. At this stage a mixer samples some vectors from image 1 and some vectors from image 2 generating the representation of a ``new\\u2019\\u2019 composed image. These new representation are then noised and denoised thanks to a diffusion model before going through the decoding stage of the VAE. The mixing component can be adjusted according to the desired inductive bias. For attribute disentanglement, the model enforces mutual exclusivity by ensuring each latent vector is sampled from only one of the two images. In contrast, for object disentanglement, this exclusivity constraint is removed, allowing, for instance, the first latent vector to be sampled from both images.\", \"the_objective_function_is_composed_of_three_terms\": \"(1) a latent denoising objective using a diffusion decoder (as in Jung et al., 2024); (2) a term to maximize the likelihood of the composed image, implemented as a diffusion loss, where the diffusion model is pre-trained for each task and then frozen; and (3) a consistency objective, which ensures that the latent representation $z$ of a given image and the latent representation re-encoded after decoding the reconstructed image from $z$ remain close. For this last term, the authors found that using an NCE-like objective, where each representation should be close to its counterpart and distant from other batch representations, outperformed simply minimizing cosine similarity.\\n\\nThe proposed method is evaluated against various baselines, datasets, and metrics for both attribute and object disentanglement, showing improved performance across the board.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is easy to read. The proposed framework leverages and combines many techniques (such as diffusion models, SSL, optimal transport) in interesting way. The final framework is simple and from the reported results effective.\", \"weaknesses\": [\"The main weaknesses of this paper are in the empirical evaluations. Specifically, some of the results reported do not match those previously published, a very common task used to assess object disentanglement (unsupervised segmentation) is missing, none of the experiments are done on realistic or complex datasets (although recent state-of-art works do employ those kind of datasets). These are the main points to be discussed during the rebuttal. Fixing these could increase the soundness and the contribution scores, hence, also the final recommendation score. See below for more details on all these weaknesses.\", \"Results reported in this work about other baselines do not seem to match the original results reported by the respective original papers on the same tasks and datasets. For example LDS property prediction in the original paper shows much better accuracy (80.23% on Shape, compared to the one reported in this work for LSD which is only 68.25%, for comparison the proposed method accuracy is 70.90%). 
For the properties \\u201cmaterial\\u201d and \\u201cshape\\u201d the differences is even higher.\", \"State of art works on object disentanglement consistently use unsupervised segmentation to assess the usefulness of the generated representations, however, these tests are missing from the current work. This is an important task because it shows a concrete application of these type of representations (and for ease of comparison given that all recent works use both unsupervised segmentation as well as property prediction).\", \"Both set of experiments (attributes and objects) lack of realistic or more complex datasets which state-of-the art have been using (in addition to some of the datasets used in this work). While it is not needed to have results on all of the following datasets, showing that the proposed method scales to the complexity of some of those datasets comparably to the state of art would make the contribution stronger. For example:\", \"For the attribute disentanglement FactorVAE uses CelebA.\", \"For Object centric Jung et al. 2024 use Super-CLEVR (multi-colored parts and textures), and MultiShapeNet (for realistic images), while other work such as Object Centric Slot Diffusion use the MOVi-C dataset (which contains complex objects and natural background), MOVi-E datasets (which contains up to 23 objects per scene), FFHQ (high quality image of faces).\"], \"other_minor_evaluations_weaknesses\": [\"Attribute disentanglement results are reported with standard deviation (great!) but it is unclear on how many runs. Results for object disentanglement are provided without any standard deviation (but they should).\", \"Minor Writing Comments. This writing suggestions are not critical but they would improve clarity and readability of the paper. No need to discuss them in rebuttal but they do need to be fixed and could increase the presentation score.\", \"I find the first part of the paper (until section 3) lacking important details that could easily be provided. For example:\", \"The abstract is very dry, there is no mention of which are the \\u201cstrong baselines\\u201d, nor which tasks this work was tested on, nor quantitative evaluation to show that the propose method \\u201cmatches or exceeds\\u201d baselines. Consider adding more information.\", \"From the abstract (and even the introduction and the beginning of section 3.1) it is not clear what \\u201cmix\\u201d, \\u201ccompose\\u201d, \\u201ccomposition operator\\u201d mean. It could be concatenation, averaging, summing\\u2026 it will only become clear much later but It would be great to provide more details if not in the abstract (ideal) at least in the introduction.\", \"Still by the end of Section 2 there is no formal definition of \\u201cattribute\\u201d and \\u201cobject\\u201d. The first example of attributes is at page 4. Having these definitions would help the reader understanding the work much better since the beginning of the paper. From the examples at page 4 it seems that nose is an attribute and face an object but it could easily be argued that actually nose is an object in itself, or that face is an attribute of a bigger objet (human body). Again this highlight the need for a formal definition of attributes and objects.\", \"In Figure1 there is a concrete image example but it is not clear if it belongs to Attribute mixing or Object mixing. The \\u201cthing\\u201d being mixed is a cylinder and a ball so why is it linked both to attributes and objects? It would be clearer to provide an example for both. 
Note that everything becomes clearer once the whole paper has been read but the first time the reader reaches Figure 1 this could be a source of confusion.\", \"At page 6 the authors say \\u201cThis occurs because the encoder can collapse the posterior p\\u03b8(z|x) into a single mode\\u201c. I know if this is an issue with posterior collapse. If the encoder collapses the posterior, then the first loss ($L_{diff}$) should become high hence preventing the collapse. The problem seems to be related to the fact that the learnt encoding is sufficiently different (hence not collapsed) to keep $L_{diff}$ while what the authors want is not just $\\\\hat{z} = z$ but also as different as possible with respect to other $z$s.\", \"Typo (?): \\u201cwe can without modifying the objective function, which will be introduced in next paragraph.\\u201d It is not clear what is that \\u201cwe can\\u201d.\", \"Typo: line 241 \\u201can noised\\u201d.\", \"The following sentence is incomplete: \\u201cwe adjust our image encoder to take VAE features as input\\u201d. Please clarify which kind of adjustments?\", \"\\u201cWhen back-propagate the gradient through xc, we truncate the gradient at the last iteration of decoding\\u201d. Why, it would be great to explain and motivate this choice.\", \"Typo in Line 310: \\u201cmodel on each training dataset from the scratch\\u201d. Should be \\u201cfrom scratch\\u201d.\", \"It would be great to explain how you understand which latent controls which factor. I believe there is a brief explanation in the appendix but it would be great if it could be explained in the main paper.\", \"In table 3 and some part of the appendix the loss term $L_con$ is called $L_cycle$. Please update it so that it is consistent throughout the paper.\"], \"questions\": \"Please address the main weaknesses listed above. These are the most critical ones, I find the paper interesting but these weaknesses do need to be tackled, specifically:\\nA. Could you explain or correct the mismatch between your results and those previously reported?\\nB. Could you provide results on unsupervised segmentation tasks using the three typical metrics: Adjusted rand index for foreground objects (FG-ARI), mean intersection over union (mIoU), and mean best overlap (mBO) (see Jung et al 2024 as an example).\\nC. Could you provide results on at least a couple of the more complex datasets listed above (and for the tasks used in the state of art work mentioned).\\n\\nAdditionally these are more questions that are interesting to discuss.\\n\\nD. The authors state at various points in the manuscript that previous methods use inductive biases specific to either attributes or objects, making them unsuitable for both simultaneously. For instance, in the statements, \\u201cExisting disentangled representation learning methods rely on inductive biases tailored for specific factors of variation (e.g., attributes or objects). However, these biases are incompatible with other classes of factors\\u201d and \\u201cUnlike previous methods, which introduce inductive biases tailored specifically to either attribute or object.\\u201d\\nHowever, the proposed method also requires a choice of mixing strategy tailored to either attributes or objects, which seems like an inductive bias itself, specific to one type of disentanglement. Could this advance choice also be considered a form of inductive bias that is specific to objects or attributes? 
Likewise, could state-of-the-art methods (e.g., Jung et al., 2024) also be modified to handle both attributes and objects? It\\u2019s unclear to me to what extent prior methods are fundamentally \\\"unable\\\" to address both types of disentanglement, as opposed their experiments being focused on of the the two tasks but potentially adaptable to the other in a way similar to how this proposed method can be adapted via choosing an appropriate mixing strategy.\\n\\nE. In Section 2 the authors make the following comment \\u201cin object-centric scenes, the same objects can appear in different spatial locations, complicating the definition of independence metrics for object representations\\u201d. It would be great to show qualitatively in examples like Figure 2 what happens when the image contains 2 identical objects and one of them is added or removed from the image. Would the proposed framework work or would there be a confusion among those object. I say this in part out of curiosity and in part because in Figure 3 (right 3rd column for inserting) it seems the model is confusing two similar objects and is adding the one in the back rather then one in the front. Could you provide those qualitative examples (if not possible in the rebuttal then in a potential future version of the paper).\\n\\nF. I could not find any detail (even in the appendix) about w(t). Could you please provide details about this function for both attribute and object tasks.\\n\\nG. The authors mention that Jung et al. use a similar prior term but since they use the same diffusion model (as opposed to a pre-trained and frozen one) they are measuring $p(x^c|z^c)$ rather than $p(x^c)$. I have two comments and questions about this:\\n1. Even when using a frozen diffusion model, wouldn\\u2019t the final decoded image be conditioned on $z^c$? \\n2. Regardless, I think this would be a good choice to compare. How does the current framework compare quantitatively to a similar framework that uses the term from Jung et al? Using Jung et al. solution would simplify the framework and reduce the need for training an extra model. Could you provide a comparison between these two options?\\n\\nH. For the DCI metric the authors say \\u201cwe perform PCA as post-processing on the representation before evaluation, following (Du et al., 2021; Yang et al., 2023)\\u201d. While I appreciate that this has been done before I wonder if it is a fair evaluation of disentanglement when it is applied only to some methods. Shouldn\\u2019t each vector $z_i$ be considered one of the \\u201cdimensions\\u201d. With PCA one is not measuring the disentanglement of each dimension but rather the disentanglement of a rotated version of the linear combination of the dimensions. This does not seem the same. Please help me understand why this makes sense and it is a fair evaluation, or if you agree with me that this is not a fair evaluation please compute and report the DCI score without PCA.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
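The two mixing strategies summarized in the review above (index-aligned exclusive choice for attributes; pooling slots from both images for objects) can be illustrated with the following sketch. Tensor shapes and sampling details are assumptions for illustration only, not the paper's exact procedure.

```python
# Illustrative sketch of the two latent-mixing strategies described in the
# review: attribute mixing picks each of the k latent vectors exclusively from
# one of the two images, while object mixing drops the exclusivity constraint
# and draws slots from the union of both images' slots.
import torch

def attribute_mix(z_a, z_b):
    """z_a, z_b: (k, d) latents of two images; index-aligned exclusive choice."""
    k = z_a.shape[0]
    pick_b = torch.rand(k) < 0.5              # per-factor coin flip
    return torch.where(pick_b.unsqueeze(-1), z_b, z_a)

def object_mix(z_a, z_b, n_out=None):
    """Sample n_out object slots from the pooled slots of both images."""
    pool = torch.cat([z_a, z_b], dim=0)       # (2k, d)
    n_out = n_out or z_a.shape[0]
    idx = torch.randperm(pool.shape[0])[:n_out]
    return pool[idx]

z_a, z_b = torch.randn(4, 16), torch.randn(4, 16)
print(attribute_mix(z_a, z_b).shape, object_mix(z_a, z_b).shape)  # (4,16) (4,16)
```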
"{\"comment\": \"Thank you for your clarification and additional results. The addition of seeds to the object disentanglement experiments confirmed that the proposed method performs on par or slightly worse than baselines.\\nWhile the weaknesses were all discussed by the authors, I don't believe the discussion motivates a score increase.\"}",
"{\"title\": \"Official Response to Reviewer cwzS (2/2)\", \"comment\": \"> **Q3.** Experiments are done on rather simple on rather simple synthetic datasets.\\n\\n**A3**. We appreciate the valuable comments.\\nWe conducted additional experiments on CelebA-HQ for attribute disentanglement and MultiShapeNet for object disentanglement, respectively. Experimental details and results are included in Appendix A.8.\\n\\nFor the CelebA-HQ dataset, we use the attribute-mixing strategy to disentangle attribute factors. To verify the disentanglement of the learned representations, we swap each latent vector one by one between two images and present the resulting composition images in Figure 6 and 7. In the third column of each figure, we observe that while the source images lack bangs, the swapped images successfully generate bangs while preserving other attributes. Similarly, in the fourth and fifth columns, the facial expressions (e.g., smile) and skin tones of the target images are effectively transferred to the source images. These qualitative results demonstrate that our attribute-mixing strategy is capable of disentangling attribute factors, even in complex datasets like CelebA-HQ.\\n\\nWe also validate our method on MSN dataset with object-wise manipulation and unsupervised segmentation. For the object-wise manipulation task, we encode pairs of images into $N=5$ object representations and exchange random object latents between the pairs to construct composite images. As shown in Figure 8, our method successfully performed object-level insertion and removal, demonstrating that each latent representation distinctly captures individual objects. This confirms that our approach effectively disentangles object representations within the latent space.\\n\\nFor the unsupervised segmentation task, we measure FG-ARI, mIoU, mBO on object masks following common practices in object-centric literature. As our method does not have a built-in mechanism to directly express group memberships between pixels, we additionally train Spatial Broadcast Decoder on the frozen latent representations to predict explicit object masks for each latent representation (please refer to A2 and appendix A.8 for details). The results are reported in Table 17 in Appendix A.8.\\nAmong the competitive slot-attention based baselines, our method achieves second-best performances across all of three metrics. The high segmentation scores of L2C are mainly due to its slot-attention-based regularization term (see Equation 8 in the L2C paper), which explicitly encourages the slot masks to align with object shapes. Excluding L2C, our method outperforms rests of the baselines (LSD, SLATE) across all metrics, despite not employing a spatial clustering mechanism like slot attention. These results demonstrate the effectiveness of our framework in disentangling object representations in a complex dataset.\\n\\n> **Q4.** How would this generalize to more complex datasets where the exact factors of disentanglement might not be known?\\n\\n**A4**. We appreciate the reviewer\\u2019s thoughtful question.\\nAs discussed in our response to **Q1**, we believe that knowing the exact information of factors of variation is not strictly necessary for our approach. The latent representation would be learned according to the given mixing strategy, which guides specific compositional structure that the latent should satisfy. 
When there are an extremely large number of factors of variation in complex datasets, it would likely be infeasible to disentangle each exact factor. We hypothesize that the model would instead learn to group factors of variation into meaningful clusters. If the factors of variation exhibit complex compositional structures, such as hierarchical relationships, extending our current mixing strategies would be necessary. For instance, a hierarchical structure could be explicitly defined by allowing mixing only between nodes at the same level, which could enable the discovery of hierarchical factors of variation. Investigating mixing strategies for datasets with diverse and complex factors of variation is an important and promising direction for future research, and we aim to explore this in our future work.\"}",
"{\"title\": \"Thank you\", \"comment\": \"I thank the authors for their detailed response. I agree with the authors that their approach serves primarily as ''a proof of concept'' and certainly has some merit. In the author's A3 response, a lot of assumptions are made about what the model \\\"would\\\" do in real-world settings by controlling the mixing strategy, but I'm not convinced that these are well-founded as it's not obvious what mixing strategy would enable unique and composable factors to be recovered. Since no new theoretical identifiability guarantees are given for their method and the results are not particularly strong relative to the baselines in my opinion, I remain sceptical that the contributions are significant enough to warrant a substantial score increase at this stage.\"}",
"{\"title\": \"Official Response to post rebuttal comment from Reviewer iDAj\", \"comment\": \"We sincerely thank the reviewer for the thoughtful comments and for recognizing that our work serves as a proof of concept and certainly has merit. We understand and agree with the reviewer that a theoretical foundation would strengthen our contribution. However, we respectfully believe that our current version still offers meaningful contributions to the field. We hope the reviewer will reconsider our work in light of these contributions.\\n\\n\\nWe would like to respectfully highlight that many state-of-the-art (SOTA) methods in this domain also do not provide theoretical identifiability guarantees [1,2,3,4,5,6]. Achieving identifiability often requires strong assumptions and thereby methods derived with guaranteed identifiability often suffer from limited performance. Instead, much of the existing work have focused on proposing practical and efficient necessary conditions for disentanglement, such as group constraints [7] or mutual information maximization [1,2,8]. Similarly, object-centric learning approaches, such as Slot Attention-based methods [4,5,6], promote object-level disentanglement via spatial exclusiveness but do not offer identifiability guarantees. In this context, our contribution lies in **proposing a novel, practical necessary condition\\u2014the mixing strategy\\u2014which enables disentanglement of both attributes and objects in a unified framework**. \\n\\n\\nWe also recognize the concern about the scalability of our mixing strategy in capturing diverse factors of variation in real-world settings. In Appendix A.8, the results demonstrate that our mixing strategy can achieve attribute and object disentanglement in more complex datasets such as Celeba-HQ and MultiShapeNet. Although the disentanglements were done on relatively simple factors, we note that existing state-of-the-art (SoTA) methods applied to the same datasets also primarily address basic factors such as global attributes (e.g., azimuth, skin tone, facial expression) [1,3] or object-level factors [4,6]. To the best of our knowledge, no prior work has yet addressed the disentanglement of complex (eg, hierarchical relations) or intricate factors of variation. This is a challenge for the field as a whole, not just our work. \\n\\n\\nIn this regard, we believe that our approach holds greater potential to generalize to complex factors of variation compared to prior methods. Previous frameworks are often designed to disentangle either only attributes or objects and lack inherent extensibility. In contrast, our unified framework, driven by the mixing strategy as a core inductive bias, is not limited to disentangling only attributes or objects. Instead, it can flexibly adapt based on the underlying compositional structure. For instance, hierarchical relationships among factors would be explicitly embedded though mixing strategy by allowing mixing only between nodes at the same hierarchical level, potentially enabling the discovery of such structures. We believe this flexibility represents an important contribution of our work, as our general framework could be extended to discover general factors of variations (eg, hierarchical factors) in complex scenarios. 
\\n\\n\\nLastly, regarding the comment that our results are \\u201cnot particularly strong relative to the baselines,\\u201d we would like to emphasize that our method **uniquely** enables the disentanglement of both attributes and objects within a single, uniform framework. This capability, combined with results that outperform or remain comparable to SOTA methods, highlights the practical significance of our approach.\\n\\n\\nWe deeply appreciate the valuable feedback and the effort the reviewer have dedicated to reviewing our work. We hope the reviewer will consider reassessing our contributions in light of the broader context and the unique strengths of our method. \\n\\nReferences\\n\\n[1] Yang et al., Disdiff: Unsupervised disentanglement of diffusion probabilistic models, in NeurIPS 23. \\n\\n[2] Wang et al., Infodiffusion: Representation learning using information maximizing diffusion models, in ICML 23. \\n\\n[3] Ren et al., Learning disentangled representation by exploiting pretrained generative models: A contrastive learning view, in ICLR 20. \\n\\n[4] Jiang et al., Object-centric slot diffusion, in Neurips 23. \\n\\n[5] Wu et al., Slotdiffusion: Object-centric generative modeling with diffusion models, in Nuerips 23. \\n\\n[6] Jung et al., Learning to Compose: Improving Object Centric Learning by Injecting Compositionality, in ICLR 24. \\n\\n[7] Yang et al., Towards building a group-based unsupervised representation disentanglement framework, in ICLR 22.\\n\\n[8] Lin et al., Infogan-cr and modelcentrality: Self-supervised model training and selection for disentangling gans, in ICML 20.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Official Response to Reviewer bFYK (3/5)\", \"comment\": \"> **Q4.** Could the proposed method's choice of mixing strategy be considered an inductive bias, similar to prior methods, and could state-of-the-art approaches also be adapted to disentangle both attributes and objects with modifications?\\n\\n**A4**. We appreciate the valuable comments. \\nWe clarify that our goal is not to eliminate inductive biases specific to factors of variation but to propose an **inductive bias that is compatible with disentangling multiple factors of variation (e.g., attributes and objects)**. Our method implements this bias in the form of a mixing strategy, which can flexibly disentangle either objects or attributes by adjusting only the mixing strategy, maintaining the same model parameterization and objective functions. In contrast, prior methods typically embed their inductive biases into the objective functions (e.g., information-theoretic objectives) or architectural designs (e.g., slot-attention encoder). It is non-trivial to adjust such objective functions or architectural designs to achieve both attributes and objects within a single framework and to the best of our knowledge, there are no successful methods for such adaptation. \\n\\nFor instance, information-theoretic objectives proposed in attribute disentanglement often aims to reduce statistical dependencies among latent variables. However, extending this objective function to disentangle objects in object-centric scenes\\u2014where the number of objects varies or identical objects appear in different spatial locations\\u2014is non-trivial and no such work has been reported. On the other hand, object-centric learning relies heavily on architectural biases that enforce spatial exclusiveness, which cannot naturally handle spatially non-exclusive attributes (e.g., color, shape, or texture). It is also not trivial how we should design architectural biases to promote disentanglement of spatially non-exclusive attributes. \\n\\nIn the same context, state-of-the-art methods like Jung et al. (2024) cannot simply disentangle both attributes and objects by simply modifying one component, such as the objective function, architecture, or mixing strategy; Specifically, (1) Even if the objective function or mixing strategy is adjusted, the slot-attention encoder inherently enforces spatial exclusiveness of factor, preventing disentanglement of spatially non-exclusive attributes. (2) Even when removing the slot-attention encoder, reuse of diffusion decoder as a generative prior hinders accurate likelihood estimation (see response to Q7) and absence of a compositional consistency loss further hinders effective attribute disentanglement (see Ablation Study in Table 3).\\nWe hope this explanation clarifies the clear contribution of our method and why prior methods have challenges in achieving both attribute and object disentanglements. \\n\\n> **Q5.** It would be great to show qualitatively in examples like Figure 2 what happens when the image contains 2 identical objects and one of them is added or removed from the image.\\n\\n**A5**. We appreciate the valuable comments. \\nWe would like to clarify that in Figure 3, the observed issue was not a confusion by our model but rather a misdrawn arrow for the target object to be inserted. We corrected the figure in the current version. Regarding the scenario of inserting or removing objects in a scene with multiple identical objects, we agree this is an intriguing question to explore. 
As our learning objective encourages the model to learn \\\"compositional\\\" concepts, we expect it to assign each object to distinct slots, even in scenes with multiple identical objects. We will conduct this experiment and include the results in the final version of the paper. \\n\\n> **Q6.** Details of w(t) in Equation 5.\\n\\n**A6**. We appreciate the valuable feedback. $w(t)$ is a timestep-dependent function derived by [4] and usually set to $\\\\sigma^2_t=1-\\\\bar\\\\alpha_t$, where $\\\\bar\\\\alpha_t$ is a hyper-parameter controlling the noise schedules in the diffusion model [5]. We add this information in the main paper.\"}",
"{\"metareview\": \"This paper unifies the disentanglement of objects and factors of variation within a single framework. I think this is an important direction that deserves more investigation, and this paper is a step in the right direction. The method combines different architectural components and training methodologies creatively and effectively, achieving reasonable results. After discussions with the reviewers, I decided to reject the paper for the following reasons:\\n\\n1. the results are not clearly better than competitors. However, it should be acknowledged that the authors did a good job, improving the complexity and realism of their experiments during the rebuttal period. **Suggestion:** I would encourage the authors to up their baseline choices, including recent works on object-centric disentanglement (e.g., neural systematic binder). I think clearly showing the benefits of the framework is key.\\n\\n2. there is a lot of theory around both disentangled representations and object-centric learning. This is completely ignored in the paper, but I think it should be central to the benefits of the framework. This is particularly relevant as the paper also relies on weak supervision signals, but it is not clear how they differ from or if they are stronger or weaker than what is already used in the vast literature on identifiable disentangled representations.\\n\\n3. I think that the writing needs to be made much more precise. I do not understand the difference between \\\"general factors of variation\\\" and \\\"nuanced and intricate factors of variation.\\\" These should be defined in the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers discussed with the authors and I also exchanged questions with them. I also read the paper myself, because I am interested in this area and the paper was otherwise borderline.\"}",
"{\"title\": \"Official Response to Reviewer iDAj (3/3)\", \"comment\": \"> **Q6.** Could dynamic/learned mixing strategies replace fixed ones to improve flexibility in complex scenes?\\n\\n**A6**. We appreciate the insightful question. Yes, we believe dynamic or learned mixing strategies would be an interesting and effective extension of our work to flexibly discover various factors of variations with complex compositional structures. When we apply a fixed mixing strategy, the model will consistently discover factors of variation satisfying the specific compositional structure we embed through the mixing strategy. However, in real-world scenarios, the compositional structure of the factors may vary across scenes. In such cases, applying a mixing strategy adaptive to the given scene would help capturing scene-dependent factors of variations. Moreover, learning valid mixing strategies directly from data to uncover the underlying compositional structures would be another promising future direction.\\n\\n> **Q7.** Have the authors thought about under which conditions their method can provide identifiability guarantees?\\n\\n**A7**. Guaranteeing identifiability generally requires imposing strong restrictions on the class of decoders or the latent distribution. In recent object-centric literature, additive decoders have been widely adopted to guarantee identifiability [2, 6, 7]. For object disentanglement, our method could leverage an additive decoder as well and define compositionality using compositional contrast as proposed in [6, 7]. These steps would be the first step to provide identifiability guarantees for object representations. However, for factors of variation like attributes, which globally affect the image, the additive decoder and compositionality definitions may not be satisfied in general. In such cases, we believe that ensuring identifiability would require imposing specific structural constraints on the latent distribution, as explored in [3, 8]. Therefore, it is challenging to immediately identify a unified set of conditions that guarantee identifiability for both attributes and objects simultaneously. A promising direction would involve first establishing conditions for identifiability for attribute and object separately, and then develop a general theory to integrate these conditions.\\n\\n[1] Brady et al., Provably Learning Object-Centric Representations, in ICML 23.\\n\\n[2] Lachapelle et al., Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation, in NeurIPS 23.\\n\\n[3] Hyv\\u00a8arinen et al., Nonlinear ica using auxiliary variables and generalized contrastive learning, in AISTATS, 19.\\n\\n[4] Khemakhem et al., Variational autoencoders and nonlinear ica: A unifying framework, in AISTATS, 20.\\n\\n[5] Khemakhem et al., Ice-beem: Identifiable conditional energy-based deep models based on nonlinear ica, in NeurIPS 20.\\n\\n[6] Brady et al., Provably Learning Object-Centric Representations, in ICML 2023.\\n\\n[7] Wiedemer et al., Provable Compositional Generalization for Object-Centric Learning, in ICLR 2024.\\n\\n[8] Lachappelle et al., Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA, in Conference on Causal Learning and Reasoning, 2022\"}",
"{\"title\": \"Official Response to Reviewer iDAj (1/3)\", \"comment\": \"We thank the reviewer for valuable comments and suggestions. We have revised the paper to correct the typos and provide clarifications. Below, we respond to each of the individual questions.\\n\\n> **Q1.** Proposed method ensures composite image realism, but this doesn\\u2019t guarantee alignment with the intended attribute or object combinations.\\n\\n**A1**. We appreciate the valuable comment. As the reviewer pointed out, optimizing solely for realism of composite images may produce realistic images that do not align with the source attribute or object representations. We addressed such misalignment with compositional consistency loss. This loss explicitly enforces the accurate reconstruction of source latent representations from the generated composite images. By penalizing composite images that are realistic but fail to match the source latent representations, this loss ensures alignment between realistic composite images and intended latent compositions. We hope this clarifies our approach and addresses the reviewer\\u2019s concern.\\n\\n> **Q2.** There are no guarantees that the latent representations are identifiable under the current model, and by implication, neither are the compositions.\\n\\n**A2**. We agree with the reviewer that our method does not provide theoretical guarantees for the identifiability of latent representations or their compositions. While theoretical guarantees would certainly strengthen our work, **the primary focus of our paper is to demonstrate that the incompatible inductive biases traditionally used for disentangling attributes and objects can be replaced with a unified and compatible inductive bias in the form of a mixing strategy**. Furthermore, guaranteeing the identifiability of latent representations typically requires strong assumptions on the decoder function (e.g., [1,2]) or latent priors (e.g., [3,4,5]). As a result, much of the prior work in disentangled representation learning and object-centric learning often focus more on empirical performance rather than guaranteeing identifiability. Since our work prioritizes serving as a proof of concept, we also have adopted an empirical approach. Nevertheless, we agree that investigating when and how our framework can effectively learn object-centric representations and disentangle attributes within the context of identifiability theory would be a valuable direction for future research.\\n\\n> **Q3.** The fixed mixing strategies, although appropriate for the simple cases studied, are quite rigid and likely would not adapt well to more complex scenarios in real data.\\n\\n**A3**. We appreciate the valuable comment.\\nWe believe that the \\u201cfixed\\u201d mixing strategy could adapt to more complex scenarios as well. The role of mixing strategy is to define a specific form of compositional structure that the latent representation should satisfy. By controlling the mixing strategy, the model would discover different factors of variation aligned with the intended compositional structures. For instance, when an attribute mixing strategy is applied in real-world scenes, the model would learn to disentangle unique and composable factors that are always present in the scene (e.g., global lighting or style). Conversely, applying object mixing strategies to same scenes would guide the model to disentangle dynamically occurring object components within a scene. 
Therefore, we believe that the fixed mixing strategies are not a significant limitation and are applicable to both simple and complex scenarios.\"}",
"{\"title\": \"Post rebuttal comment\", \"comment\": \"Thanks to these authors for their thorough rebuttal, for providing many clarifications and performing additional empirical evaluation. The information provided in the rebuttal alleviate my main concern about the empirical evaluation, I am going to reflect this in the scores and the rating.\"}",
"{\"summary\": \"Note: I am not an expert on disentangled representation learning and know little/none of the related work.\\n\\nThe paper proposes an approach to learn a generative model for learning disentangled representations by maximizing the compositionality of representations. By mixing the representations of two images (given some constraints to make sure the results latent representations are valid) and maximizing the likelihood of the resulting composite images the model learns representations that can be disentangled on the object and attribute level. Experiments on synthetic datasets show that the model performs well in disentangling factors across several datasets both on the object and attribute level.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper addresses the learning of disentangled representations for both objects and attributes and makes use of a standard generative model for learning them. By introducing specific mixing strategies to combine latent representations of different images under given constraints the model is able to learn disentangled representations under a fairly simple framework.\\n\\nThe evaluation shows that the model learns better disentangled representations than the given baselines.\", \"weaknesses\": \"It seems like the approach is only useable if the practitioner already knows the underlying factors they want to disentangle, as the latent mixing strategies take this knowledge under account.\\nIt's also not clear to me if this would translate to real-world datasets with more complicated distributions.\\nThe experiments show results for either object disentanglement or attribute disentanglement but no experiments for joint object and attribute disentanglement.\\nAll experiments are done on rather simple synthetic datasets.\", \"questions\": \"How would this generalize to more complex datasets where the exact factors of disentanglement might not be known. Does this scale to lots of disentangled factors (dozens or hundreds) or would that make the mixing strategies too complicated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Response to Reviewer XJvB\", \"comment\": \"We thank the reviewer for valuable comments. Below, we respond to each of the individual questions.\\n\\n> **Q1.** The impact of paper can be more by showing results on real world data\\n\\n**A1.** We appreciate the valuable comments.\\nFollowing the reviewer\\u2019s suggestion, we conducted additional experiments on CelebA-HQ for attribute disentanglement and MultiShapeNet for object disentanglement, respectively. Experimental details and results are included in Appendix A.8.\\n\\nFor the CelebA-HQ dataset, we use the attribute-mixing strategy to disentangle attribute factors. To verify the disentanglement of the learned representations, we swap each latent vector one by one between two images and present the resulting composition images in Figure 6 and 7 in Appendix A.8. In the third column of each figure, we observe that while the source images lack bangs, the swapped images successfully generate bangs while preserving other attributes. Similarly, in the fourth and fifth columns, the facial expressions (e.g., smile) and skin tones of the target images are effectively transferred to the source images. These qualitative results demonstrate that our attribute-mixing strategy is capable of disentangling attribute factors, even in complex datasets like CelebA-HQ.\\n\\nWe also validate our method on MSN dataset with object-wise manipulation and unsupervised segmentation. For the object-wise manipulation task, we encode pairs of images into $N=5$ object representations and exchange random object latents between the pairs to construct composite images. As shown in Figure 8, our method successfully performed object-level insertion and removal, demonstrating that each latent representation distinctly captures individual objects. This confirms that our approach effectively disentangles object representations within the latent space.\\n\\nFor the unsupervised segmentation task, we measure FG-ARI, mIoU, mBO on object masks following common practices in object-centric literature. As our method does not have a built-in mechanism to directly express group memberships between pixels, we additionally train Spatial Broadcast Decoder on the frozen latent representations to predict explicit object masks for each latent representation (please refer to A2 and appendix A.8 for details). The results are reported in Table 17 in Appendix A.8.\\nAmong the competitive slot-attention based baselines, our method achieves second-best performances across all of three metrics. The high segmentation scores of L2C are mainly due to its slot-attention-based regularization term (see Equation 8 in the L2C paper), which explicitly encourages the slot masks to align with object shapes. Excluding L2C, our method outperforms rests of the baselines (LSD, SLATE) across all metrics, despite not employing a spatial clustering mechanism like slot attention. These results demonstrate the effectiveness of our framework in disentangling object representations in a complex dataset.\\n\\n> **Q2.** Are there any further insights on the failure cases? Is it harder to compose attributes or objects?\\n\\n**A2.** A failure case we observed occurs when a single latent encodes multiple factors of variation. However, this does not conflict with our compositional objective, as the objective remains satisfied even in such cases. We found that such issues can be mitigated by adjusting the weight of the compositional consistency loss. 
Specifically, the denominator of the compositional consistency loss includes a term that increases the distances between latents encoded from different images. This encourages the model to naturally encode distinct factors into separate latents, preventing the formation of empty or redundant latents in order to maximize the distance between each latent.\\nIn practice, we found that disentangling objects is a bit more challenging than disentangling attributes. This is due to spatial overlap between objects when they are randomly composed, which can occasionally result in unrealistic images. While this does not significantly impact overall training, it could cause slower convergence compared to attribute mixing, where mutually exclusive attributes are always composed without overlap.\"}",
"{\"title\": \"Official Response to Reviewer cwzS (1/2)\", \"comment\": \"We thank the reviewer for valuable comments. Below, we respond to each of the individual questions.\\n\\n> **Q1.** It seems like the approach is only usable if the practitioner already knows the underlying factors they want to disentangle, as the latent mixing strategies take this knowledge under account.\\n\\n**A1**. We appreciate the valuable comment.\\nWhile it is true that our current approach requires prior knowledge on target underlying factors to determine proper mixing strategy, we note that prior works also rely on such knowledge as they must select proper inductive bias (eg, either information-theoretic objectives for attributes or slot-attention encoder for objects). \\nHowever, we believe that our mixing strategies would be still applicable even without exact information about underlying GT factors of variation in the data. The mixing strategy serves as a general framework for defining the compositional structure we expect our latents to follow, allowing the model to discover factors that align with a user-defined compositional structure. For instance, applying a fixed attribute mixing strategy encourages the model to disentangle globally consistent and combinable factors (e.g., global lighting or style) that are always present in the scene. Conversely, applying object mixing strategies to the same scenes helps the model disentangle dynamically occurring components within a scene, such as individual objects. In this way, the choice of mixing strategy flexibly determines the type of factors being disentangled.\\n\\n> **Q2.** The experiments show results for either object disentanglement or attribute disentanglement but no experiments for joint object and attribute disentanglement.\\n\\n**A2**. We appreciate the insightful comment.\\nWe agree that our current experiments focus on either object or attribute disentanglement separately, without addressing joint disentanglement. This choice was made because our **primary goal in this work was to demonstrate the capability of our novel inductive bias (i.e., compositional bias) to disentangle objects or attributes within a single framework**. By presenting separate experiments for object and attribute disentanglement, we aimed to clearly highlight the effectiveness of our method in achieving comparable performance on both tasks. We agree that exploring joint object and attribute disentanglement is an important direction that aligns with the broader goals of our framework. Although it is beyond the scope of the current work, we consider it an important direction for future research.\"}",
"{\"title\": \"Official Response to Reviewer bFYK (4/5)\", \"comment\": \"> **Q7.** What is the difference from the prior term in Jung et al. and how does the proposed framework compare quantitatively to Jung et al.'s approach?\\n\\n\\n**A7**. We appreciate the thoughtful comments. \\nTo address the first question, our frozen diffusion model is an \\u201cunconditional\\u201d model that estimates $p(\\\\mathbf{x}^c)$ and does not rely on $\\\\mathbf{z}^c$ during estimating the likelihood. In contrast, L2C (Jung et al.) uses a conditional diffusion model that estimates $p(\\\\mathbf{x}^c|\\\\mathbf{z}^c)$, and thereby likelihood estimation is directly conditioned on $\\\\mathbf{z}^c$. Such distinction is crucial because estimation of $\\\\log p_\\\\psi(\\\\mathbf{x}^c |\\\\mathbf{z}^c)$ is inherently sensitive to the conditioning variable $\\\\mathbf{z}^c$. In L2C, the conditional likelihood estimator $p_\\\\psi(\\\\mathbf{x}^c | \\\\mathbf{z}^c)$ is learned using denoising losses with $\\\\mathbf{z}$ encoded from individual images $\\\\mathbf{x}$. When $\\\\mathbf{z}^1$ and $\\\\mathbf{z}^2$ are randomly composed to form $z^c$, the resulting $\\\\mathbf{z}^c$ may become out-of-distribution (OOD) samples (i.e., unseen sample during training). Consequently, the estimation of $\\\\log p(\\\\mathbf{x}^c | \\\\mathbf{z}^c)$ becomes inaccurate for OOD $\\\\mathbf{z}^c$, and also there is no guarantee that maximizing $p(\\\\mathbf{x}^c | \\\\mathbf{z}^c)$ for OOD $\\\\mathbf{z}^c$ yields realistic samples for $\\\\mathbf{x}^c$. In contrast, our method employs an unconditional diffusion model pre-trained to estimate $p(\\\\mathbf{x}^c)$ and it ensures robust estimation of $\\\\log p(\\\\mathbf{x}^c)$ regardless of $p(\\\\mathbf{z}^c)$. Therefore, our prior term is more robust for maximizing the likelihood of $\\\\mathbf{x}^c$ compared to that of L2C. \\n\\nNevertheless, we agree that comparing our method to L2C \\u2019s approach would strengthen the analysis. To address this, we conducted experiments on CLEVRTex replacing our prior loss with the one proposed in L2C. The results are presented in Table below. Those results clearly highlight the superior performance of our prior term over L2C. We hope this explanation and comparison clarify the advantages of our approach.\\n\\n| | Shape ($\\\\uparrow$) | Material($\\\\uparrow$) | Position ($\\\\downarrow$) |\\n|------|:----------------:|:----------------:|:------------------:|\\n| Ours | **70.90** | **52.20** | **0.133** |\\n| Ours + L2C prior | 54.58 | 27.18 | 0.165 |\\n\\n> **Q8.** Elaboration on why PCA is used when computing DCI and the resulting values are fair\\n\\n**A8**. We did not apply PCA to the entire latent representation. Instead, PCA was performed on each vector $\\\\mathbf{z}_i$, and the principal component with the highest principal value was selected. This adaptation was necessary because DCI is typically computed dimension-wise, but we cannot treat vector representations as scalars. By extracting the dominant component for each $i\\\\mathbf{z}_i$, we can compute DCI while maintaining a fair evaluation across methods. We hope this explanation resolves the concern.\\n\\n> **Q9.** Standard Deviation of experiments. \\n\\n**A9**. We appreciate the valuable comment. In attribute disentanglement, we measure the performance for 10 different runs following DisDiff. Also, following the reviewer\\u2019s suggestion, we train our model with three different seeds and report the standard deviations in Table 19 of Appendix A.9. 
Due to the limited time for rebuttal period, this was conducted only for our method, but we will include standard deviations for all baselines as well in the final version of the paper.\\n\\n> **Q10.** Modify Figure 1 to prevent confusion with mixing examples\\n\\n**A10**. We appreciate the valuable suggestion. Following the suggestion, we modified the figure to clearly indicate that the illustrative example represents the object mixing strategy by adding a labeled arrow and explicitly clarifying it in the caption. While we first considered including examples for both attribute mixing and object mixing as suggested, the limited space in the figure made it challenging to accommodate both.\"}",
"{\"title\": \"Official Response to Reviewer bFYK (1/5)\", \"comment\": \"We thank the reviewer for valuable comments and suggestions. We have revised our paper and provide clarification following the reviewers\\u2019 writing suggestions. Below we respond to the individual questions.\\n\\n> **Q1.** Could you explain or correct the mismatch between your results and those previously reported?\\n\\n**A1**. We appreciate the valuable comments. \\nThe difference in reported metrics between baseline results is due to different experimental setups. In LSD's original experiments, the encoder $E_\\\\theta$ directly encodes RGB images into slot representations, whereas in our experiments, encoder $E_\\\\theta$ gets latent features obtained from a pretrained vae [1] as input. This difference comes from our effort to align input and output formats for all baselines to ensure fair and direct comparisons across methods. However, prior works on object-centric learning adopt diverse input and output formats (RGB image, VAE feature, DINO feature etc) across different models, which hinders direct comparison across the methods. To address this, we aligned both the input and output formats to latent features encoded by a pretrained vae [1] for all methods. Despite this modification to latent encoder for baselines, the results in Table 15,Table 16 and Figure 5 in Appendix A.7 consistently demonstrate that baselines still effectively learn object-centric representations under this standardized setup. \\n\\nTo further address the concerns, we additionally trained our method using an RGB image encoder (the only modification) identical to LSD and L2C, and reported the property prediction results in the Table below. For comparison with both LSD and L2C, we followed the evaluation protocol of the L2C paper and compared our results to the values reported in the L2C paper. As shown in Table below, our method still achieves comparable results to state-of-the-art baselines, re-ensuring that our novel mixing strategy provides a robust inductive bias for learning object-centric representations. We hope our explanation along with the additional experiments showing comparable performance to baselines, addresses the reviewer\\u2019s concerns.\\n\\n| | Pos ($\\\\downarrow$) | Shape ($\\\\uparrow$) | Material($\\\\uparrow$) |\\n|------|:----------------:|:----------------:|:------------------:|\\n| SLATE+ |0.1757|78.72|67.99|\\n| LSD |0.1563|85.07|_82.33_|\\n| L2C |_0.1044_ |**88.86**|**84.29**|\\n| Ours |**0.1033**|_86.43_| 78.20|\\n\\n\\n> **Q2.** Could you provide results on unsupervised segmentation tasks using FG-ARI, mIoU, mBO\\n\\n**A2**. We appreciate the insightful comments. \\nFollowing the reviewer\\u2019s suggestion, we additionally evaluated unsupervised segmentation quality of pretrained encoder and reported it in Appendix A.7. Unlike slot-attention-based methods, our method does not have a built-in mechanism to directly express group memberships between pixels. Therefore, we trained a Spatial Broadcast Decoder [2] on top of frozen latent representations to predict explicit object masks for each latent representation. We train Spatial Broadcast Decoder with a reconstruction loss to recover the original image from frozen latents in an unsupervised manner, and it requires minimal training costs as the encoder remains frozen and the decoder is shallow. With this explicit object mask, we compare our method against two strong baselines in slot-attention-based works, LSD and L2C, on CLEVR and CLEVRTex. 
For a fair comparison, we evaluate the baselines using both slot-attention mask and object masks obtained by training a Spatial Broadcast Decoder on their frozen slot representations. The results are reported in Table 15,16 and Figure 5 in Appendix A.7. \\n\\nOn the CLEVR , our method achieved the best mIoU, mBO scores and comparable FG-ARI. A high FG-ARI of our method indicates that each mask captures complete objects, confirming effective object disentanglement of our method. However, we observed that the background mask is split across multiple latents. This is because constant backgrounds in CLEVR does not affect compositional generation and thereby avoiding penalties from the compositional loss. Since the constant background carries minimal information, it does not impact the quality of object representations and compositionality, and we do not consider it a problem from an object-centric representation perspective. In the CLEVRTex, our methods outperforms both LSD and L2C for all three metrics. In Figure 5, we observed that our method consistently encodes complete objects into distinct latents, whereas LSD and L2C often split objects into multiple latents. Also, in contrast to CLEVR, as CLEVRTex has various background colors, our model successfully encodes all of the background information into a single latent. \\nThese experiments on unsupervised segmentation confirm that our pretrained encoder achieves effective object-wise disentanglement. Notably, our method outperforms baselines in object segmentation without relying on spatial clustering architectures like slot attention.\"}",
"{\"title\": \"Official Response to Reviewer bFYK (5/5)\", \"comment\": \"> **Q11.** Clarification on Compositional Consistency Loss\\n\\n**A11**. We acknowledge that our explanation may have been misleading. The reviewer\\u2019s understanding is correct. The issue we describe is not posterior collapse in the traditional sense. Rather, it refers to a scenario where reconstruction is successful ($(L_{\\\\text{diff}}$ is very low), but $\\\\mathbf{z}^i$ and $\\\\mathbf{z}_j$ become close in the latent space for every data index $i, j$. In this situation, even if $\\\\mathbf{x}^c$ generates an image irrelevant to $\\\\mathbf{z}^c$, the penalty from $d(\\\\mathbf{\\\\hat z}^c, \\\\mathbf{z}^c)$ remains low, reducing the effectiveness of the compositional consistency loss. To effectively penalize such cases, we introduce a contrastive term to ensure that $\\\\mathbf{\\\\hat z}^c$ remains close to $\\\\mathbf{z}^c$ but as different as possible with respect to other negative samples $\\\\mathbf{z}_j$. We have updated the main paper to clarify this point and better explain how our method addresses this issue. \\n\\n> **Q12.** Motivation of gradient truncation trick\\n\\n**A12**. As diffusion models require iterative denoising steps to decode $\\\\mathbf{z}^c$ into $\\\\mathbf{x}^c$, it is computationally prohibitive to back-propagate the gradients through all of the denoising steps. To address this, we draw inspiration from recent works [6, 7] in diffusion-based optimization. These studies demonstrate that truncating gradients at the last iteration of the denoising process effectively balances computational feasibility and optimization performance. Following this approach, we truncate the gradient at the last iteration of the denoising step to ensure efficient back-propagation of the gradient. We added this detail in the main paper. \\n\\n[1] Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, in CVPR 22. \\n\\n[2] Watters et al., Spatial broadcast decoder: A simple architecture for learning disentangled representations in vaes, in Arxiv 19. \\n\\n[3] https://huggingface.co/CompVis/ldm-celebahq-256\\n\\n[4] Poole et al., Dreamfusion: Text-to-3d using 2d diffusion, in ICLR 2023. \\n\\n[5] Ho et al., Denoising Diffusion Probabilistic Models, in NeurIPS 2020. \\n\\n[6] Clark et al., Directly fine-tuning diffusion models on differentiable rewards, ICLR24. \\n\\n[7] Aligning Text-to-Image Diffusion Models with Reward Backpropagation, ArXiv.\"}"
]
} |
1ThYY28HXg | GenXD: Generating Any 3D and 4D Scenes | [
"Yuyang Zhao",
"Chung-Ching Lin",
"Kevin Lin",
"Zhiwen Yan",
"Linjie Li",
"Zhengyuan Yang",
"Jianfeng Wang",
"Gim Hee Lee",
"Lijuan Wang"
] | Recent developments in 2D visual generation have been remarkably successful. However, 3D and 4D generation remain challenging in real-world applications due to the lack of large-scale 4D data and effective model design. In this paper, we propose to jointly investigate general 3D and 4D generation by leveraging camera and object movements commonly observed in daily life. Due to the lack of real-world 4D data in the community, we first propose a data curation pipeline to obtain camera poses and object motion strength from videos. Based on this pipeline, we introduce a large-scale real-world 4D scene dataset: CamVid-30K. By leveraging all the 3D and 4D data, we develop our framework, GenXD, which allows us to produce any 3D or 4D scene. We propose multiview-temporal modules, which disentangle camera and object movements, to seamlessly learn from both 3D and 4D data. Additionally, GenXD employs masked latent conditions to support a variety of conditioning views. GenXD can generate videos that follow the camera trajectory as well as consistent 3D views that can be lifted into 3D representations. We perform extensive evaluations across various real-world and synthetic datasets, demonstrating GenXD's effectiveness and versatility compared to previous methods in 3D and 4D generation. | [
"3D Generation; 4D Generation; Diffusion Models"
] | Accept (Poster) | https://openreview.net/pdf?id=1ThYY28HXg | https://openreview.net/forum?id=1ThYY28HXg | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wqPZx24mBZ",
"tX1EIQqCUB",
"Fr3pGPDsXD",
"FJa2b0BP82",
"79L7VqTEvr",
"3nh3XuH7Kx"
],
"note_type": [
"official_review",
"official_review",
"meta_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1730565178081,
1730594826911,
1735043010559,
1730414520497,
1737523575942,
1730487615315
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3438/Reviewer_x1Cy"
],
[
"ICLR.cc/2025/Conference/Submission3438/Reviewer_rGgP"
],
[
"ICLR.cc/2025/Conference/Submission3438/Area_Chair_fUWH"
],
[
"ICLR.cc/2025/Conference/Submission3438/Reviewer_oiM6"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3438/Reviewer_nZJ1"
]
],
"structured_content_str": [
"{\"summary\": \"The paper trained a video generation model that can control camera trajectory and magnitude of motion and supports multiple frame conditioning.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper shares technical details on how to annotate magnitude motion and camera poses from in the wild videos. The alpha-fusion layers for motion disentangle seems an interesting design.\", \"weaknesses\": \"First, I feel the claim of being able to perform 4D generation is an over-claim to me. 4D generation requires the capability of either directly generating 4D representations such as dynamic 3D GS, or at least generating synchronized multi-view videos like in SV4D. Neither of these capabilities were presented in the main paper. In table 1, the capability of generating synchronized videos were not discussed, and to me, this is a severe misrepresentation. It would be more appropriate for the author to rebrand their method as a motion-controllable and 3D-aware video model.\\n\\n2nd, although the idea of using alpha-fusion seems interesting, it is currently not properly evaluated. It did not show how changing alpha values affects the magnitude of generated motions, and it did not evaluate the camera control accuracy as other related papers did. Reporting CLIP-score and FID is not enough to reflect the accuracy of the proposed capability of the method.\\n\\n3rd, a minor point, I am not sure promoting the capability of taking multiple image input can be regarded as a major technical contribution, given it is already supported in prior works including CAT3D, and it is conceptually trivial to be implemented in most video generation models.\", \"questions\": \"The author should provide rigorous analysis of the accuracy of the camera-controll capability, and how changing alpha values affects the generated motions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes GENXD, a latent diffusion model for 3D/4D generation of objects or scenes. Specifically, it adopts masked latent conditions to support various number of input views, and the alpha-fusing mechanism allows joint training on 3D and 4D data. Considering the lack of 4D scene dataset, the authors further curated a new dataset, CAMVID-30K, by estimating camera with a SfM-based method and filtering out videos without object motion. Qualitative and quantitative results show that the proposed method generates comparable or slightly more satisfactory outputs than corresponding prior arts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"S1: Sensible model design\\nAlthough the masked latent conditioning is not new, the architectural modification upon SVD is sensible and allows joint training on 3D and 4D data.\", \"s2\": \"General model for 3D and 4D generation\\nThe proposed model is capable of 3D and 4D generation of both object-centric and scene-level videos, which is more general than most prior methods. On a side note, the authors should also include MotionCtrl in Table 1.\", \"s3\": \"Good writing\\nThe paper is well-written and easy to follow overall.\", \"weaknesses\": \"W1: Limitation of camera pose estimation\\nThe proposed camera pose estimation relies on segmentation of all moving pixels. However, in scenarios where camera moves independently of object motion, especially when camera motion is large or objects take up a large portion of the scene, it would be challenging to estimate accurate camera pose. Does the method assume that these cases do not exist in the dataset?\", \"w2\": \"Quality of 3D object generation\\nThe results of 3D object generation seem to be of comparable or worse quality than the prior state-of-the-arts both qualitatively (Figure 10) and quantitatively (Table 6). Moreover, the quantitative evaluation is incomplete since some more recent methods (Zero123XL, Magic123, SV3D, EscherNet, etc) are missing and the metric is limited to CLIP-I only, while prior works usually report metrics like LPIPS, SSIM, Chamfer Distance, 3D IoU (on 3D object datasets like Google Scanned Objects).\", \"w3\": \"Evaluation of 4D object generation\\nAgain, the quantitative evaluation for 4D object generation is limited to the CLIP-I metric and more recent methods like STAG4D and DreamGaussian4D are missing. Also, it is unclear if the metrics in Table 3 are calculated on the training (synthesized) video frames only or on densely sampled views and timestamps. Since the proposed method optimizes 4D-GS only on one camera orbit without SDS loss, I suspect that the outputs look good on these frames/views but worse than other methods in novel views.\", \"w4\": \"Small camera motion in 4D scene generation\\nAll the presented results on 4D scene generation seem to have smaller camera motion compared to results shown in prior work like MotionCtrl. Although the results in Figure 5 and supplemental video show decent temporal consistency and motion, I\\u2019m wondering if it is limited to camera trajectories without much deviation from the input view.\", \"w5\": \"Lack of results on motion strength control\\nWhile the paper emphasizes the contribution of motion strength control, there is only one example of a simple driving scene. 
It would be more insightful to show more diverse motion cases to understand the effectiveness and limitations of it.\", \"questions\": \"Q1: Following W1, what are the assumptions and failure cases of the proposed camera estimation?\", \"q2\": \"Following W3, please describe how the metric is calculated in detail for fair comparison against prior methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper introduces an approach for sequential image generation that can properly depict 3D and 4D scenes. The key idea is to investigate camera and object movements jointly, which leads to curating real-world video to make a new CamVid-30k dataset. The proposed pipeline GenXD has new multiview-temporal modules, which can disentangle camera and object movements.\", \"the_strengths_of_this_paper_are_summarized_as_follows\": [\"Sensible model design\", \"Interesting idea for motion disentangle\", \"First approach that proposes a general model for 3D and 4D generation\", \"New dataset for 4D scene reconstruction\", \"Good paper writing, easy to follow\"], \"the_weaknesses_of_this_paper_are_summarized_as_follows\": [\"Limitation of camera pose estimation - possibility of failure cases\", \"Missing references (Consistent4D and STAG4D)\", \"Minor questions regarding evaluation metrics and datasets\", \"Overall, AC confirms that the experiment is extensive, and the results are compelling in the various cases. Although the paper received divergent scores, particularly the rejection score by x1Cy, AC confirms that the merits in this paper outweigh the concerns raised by x1Cy.\"], \"additional_comments_on_reviewer_discussion\": \"This paper received diverged scores {3, 6, 8, 8}. Overall, the reviewers weigh more on the value of the pioneering attempt for 3D and 4D scene generation. AC notes that the authors provide detailed and impressive feedback on the reviewers' questions. Specifically, the authors provide a clear summary that shows the merit of the proposed approach compared with other baseline approaches that can produce image-to-4D generation, video-to-4D generation, 4D scene generation, 3D object generation, and 3D scene generation. Moreover, the authors provide an anonymous webpage to provide additional results.\\n\\nRegarding each reviewer's comment, the reviewer rGgp requested clarification on the camera pose estimation module, evaluation of the additional datasets, and evaluation with more 4D object generation tasks. The authors provide thorough evaluation results that compare the proposed approach with Zero123, Zero123-XL, EscherNet, Consistent4D, DreamGaussian4D, and STAG4D. The authors also offer a comparison with MotionCtrl and Camera Ctrl. The reviewer rGgp clarified that the additional results were convincing and increased the rating. The reviewer xZJ1 asks about missing references, evaluation metrics, data selection procedure, and missing technical details. The authors provide thorough feedback by providing additional comparisons. The reviewer rGgp was satisfied with the comments and highlighted that this paper is pioneering. The reviewer oiM6 provided a constructive review, mostly about clarification of the technical details and more results, and questions on the ablation study. The reviewer also mentioned that the authors' rebuttal clarified initial concerns and stated that this work is well-motivated and evaluation is sufficient. \\n\\nIn particular, the reviewer x1Cy provided a short initial review and asked about the misleading arguments, evaluation of alpha-fusion, and camera control accuracy. The reviewer also requested a comparison with SV4D and an unclear setting about 4D generation. AC sees that the authors provided a thorough rebuttal to this request. The reviwer x1Cy states that still the argument regarding 4D generation is still misleading and mention provided results are limited and may cherry-picked. 
During the Reviewer-AC discussion phase, the AC again requested reviewer x1Cy's opinion on whether the rebuttal and the other reviewers' comments would change the score. However, reviewer x1Cy did not reply.\"}",
"{\"summary\": \"In the paper the authors propose two main contributions: a curated dataset for 4D generation model learning, named CamVid-30K; and a model trained to generate in 3D or 4D given arbitrary condition images, named GenXD.\\n\\nAuthors proposed a detailed pipeline on how to combine existing techniques to curate a 4D scene datasets for model training. Including instance segmentation modules for static / dynamic decomposition, static structure-from-motion to recover camera parameters and sparse depth map, relative depth align with sparse depth map for spotting the dynamic object and introducing a motion strength factor as additional condition.\\n\\nAuthors proposed a new model GenXD to train on this dataset combining with other object-centric 3D/4D datasets and 3D scene datasets. They further design a $\\\\alpha$-fusing strategy to better disentangle the spatial and temporal information in the data source. Experiments across various benchmark show impressive performance of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The data curation pipeline for transforming existing videos into trainable 4D dataset is quite useful, and the proposed curation pipeline and the CamVid-30K should be beneficial to the field.\\n\\n2. Combining all source and sub-tasks' data (object/scene, 3D/4D) is fundamentally useful and a model trained on mixture of data should have better generalization ability. The proposed $\\\\alpha$ parameter seems can be understood as an explicit control to switch between 3D and 4D generation given same conditions.\\n\\n3. The results are promising and generally good. And extensive evaluations on multiple benchmarks show the effectiveness of the proposed method.\", \"weaknesses\": \"There are some minor errors or confusing points in the paper. I'll list some here and some in the following questions section.\\n\\n1. In L253, \\\"The keypoint $(u_i, v_i)^T$ in the $i$-th frame is first back-projected into world space to obtain the 3D keypoint $kp_i$\\\". I agree here the $kp_i$ should be in world space, but according to Eq.(3) seems it's in the camera space? From my perspective the Eq.(3) is transforming image-space coordinates to camera-space coordinates, missing the step of transforming to world coordinates.\\n\\n2. In all the figures with camera trajectory visualization, the legends and axis notations are very small and impossible to tell the actual information, also the trajectory only lies in a small region in the plot. I suggest authors remove the axis notations if they are too small, and zoom in to show the trajectory in a more detailed way.\\n\\n3. In section 5.2 4D object Generation, it seems unfair to say \\\"results in our method being $100\\\\times$ faster\\\", as the efficiency comes from using a different underlying 3D representation comparing to other methods, which are orthogonal to the proposed method. I think here using CLIP similarities for comparison is reasonable. Showing speed is fine but shouldn't be used as comparison.\\n\\n4. I think in general the paper is with good results. But according to the task of the proposed method, I expect to see more scene level 3D or 4D generation results, including larger camera trajectories and failure examples.\", \"questions\": \"1. The statement in section 3 that \\\"when the camera is moving while the object remains static, the motion strength is significantly smaller compared to videos with object motion\\\" seems not so easy to understand. 
I assume the authors mean that this is the common case for naturally captured video where cameras are still or moving in slow motion?\\n\\n2. Does the $\\\\alpha$ need to be explicitly set during training / inference? For example, let the network itself output the weight when dealing with 4D content and explicitly set it as 0 when dealing with 3D content. If so, then it would be interesting to see, given the same conditions (more complicated than Figure 7), what the model would output for different $\\\\alpha$. Like given multi-frames of a static scene but telling the model to do 4D generation, and given multi-step frames with dynamic objects but forcing the model to do static 3D generation.\\n\\n3. It's kind of confusing whether the 5.4 ablation study is for model training or just for inference after training. If it's after training, then the results in Table 5 are somehow not so useful, as it's trained with $\\\\alpha$ but not allowed to use it at inference time, which would certainly lead to performance drop.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"1. This paper aims to jointly generate 3D and 4D objects and scenes with camera control.\\n2. This paper proposed multiview-temporal modules that disentangle camera and object movements and thus can learn from both 3D and 4D data. The proposed approach employs masked latent conditions to support a variety of conditioning views.\\n3. They construct a dataset CamVid-30K that consists of high-quality 4D data with camera poses for model training\\n4. Extensive experiments show that the proposed method can achieve comparable or better results than baselines in 3D/4D object/scene generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is the first to generate any 3D and 4D scenes with camera control and an arbitrary number of condition frames.\\n2. The proposed multiview-temporal modules with alpha-fusing enable separate multi-view and temporal information and effectively conduct both 3D and 4D generation.\\n3. The paper constructs a new dataset for 4D scene generation. The dataset and the data curation pipeline potentially benefit the following video generation with camera control and 4D generation.\\n4. The paper is well-written and easy to follow.\", \"weaknesses\": \"**Experiments**:\\n\\n1. In the experiment of 4D object generation, some relevant references and comparisons are missing, such as Consistent4D [1] and STAG4D [2]. Since these works are open-source, it would strengthen the paper to include these baselines or clarify why they are not suitable for comparison. They take single-view video as input, which should be applicable for this work.\\n2. In Table 3, it would also be beneficial to report temporal consistency metrics (e.g., FVD), as temporal consistency is critical for 4D object generation.\\n\\n**Minor Points:**\\n\\n1. Clarifying the selection process for the 44K dynamic data in Objaverse-XL would be helpful. According to Diffusion4D [Liang et al. (2024)], ~323K dynamic objects were collected. For instance, what filters were applied in this work? Will the selected dynamic objects be publicly available? Adding these details in the Appendix would enhance transparency.\\n2. Some technical details are missing: What is the maximum number of frames the model supports? Additionally, in Table 3, Zero-1-to-3 and RealFusion were originally designed for 3D reconstruction\\u2014how were they adapted for 4D generation in this work?\\n\\n[1] Jiang, Yanqin, et al. \\\"Consistent4d: Consistent 360 {\\\\deg} dynamic object generation from monocular video.\\\" ICLR 2024.\\n\\n[2] Zeng, Yifei, et al. \\\"Stag4d: Spatial-temporal anchored generative 4d gaussians.\\\" ECCV 2024.\", \"questions\": \"1. In the top case of Figure 10, the results from the proposed method appear off-center, possibly due to an inappropriate object-to-image occupancy ratio in the input images. Adjusting this ratio might improve the alignment of the results.\\n2. If the learnable fusion weight, alpha, is set to 1, would it enable video generation based on the first frame? With alpha at 1, only the outputs from the temporal modules would contribute.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1TXDtnDIsV | Learning Mamba as a Continual Learner | [
"Chongyang Zhao",
"Dong Gong"
] | Continual learning (CL) aims to efficiently learn and accumulate knowledge from a data stream with different distributions. By formulating CL as a sequence prediction task, meta-continual learning (MCL) enables meta-learning an efficient continual learner based on recent advanced sequence models, e.g., Transformers. Although attention-free models (e.g., Linear Transformers) can ideally match CL's essential objective and efficiency requirements, they usually do not perform well in MCL. Considering that the attention-free Mamba achieves excellent performance matching Transformers' on general sequence modeling tasks, in this paper, we aim to answer a question -- Can attention-free Mamba perform well on MCL? By formulating Mamba with a selective state space model (SSM) for MCL tasks, we propose to meta-learn Mamba as a continual learner, referred to as MambaCL. By incorporating a selectivity regularization, we can effectively train MambaCL. Through comprehensive experiments across various CL tasks, we also explore how Mamba and other models perform in different MCL scenarios. Our experiments and analyses highlight the promising performance and generalization capabilities of Mamba in MCL. | [
"Continual Learning",
"Sequence Modelling"
] | Reject | https://openreview.net/pdf?id=1TXDtnDIsV | https://openreview.net/forum?id=1TXDtnDIsV | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yTRFSeYK5q",
"wkctyfHTZH",
"viKxtEgiZF",
"nMkvPdvXIz",
"nMQuBAo2sN",
"ivSmgPT9bi",
"hHSEK4Eygi",
"aMD9uydWTR",
"Yj20RWkJHu",
"Wnf84Odsqi",
"WZcttp7xGo",
"Vpb6xwnAH7",
"NybPoPWDRM",
"AdUassF1L2",
"8qadoCzGGe",
"7EKJG9grUw",
"60kfgUj3Q5",
"4Ltg1gh1Oa",
"3xHDBB4575",
"2CEx1dKhXB"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_review"
],
"note_created": [
1730007531941,
1733099423618,
1732385318035,
1732384789072,
1732863024758,
1732386931455,
1733154563051,
1732386226674,
1732387427969,
1737523373635,
1732863143668,
1732386580904,
1732863365967,
1733099513874,
1732762469464,
1730551061480,
1732762429138,
1732386762601,
1734878019708,
1731137178245
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9/Reviewer_BA8y"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Reviewer_BA8y"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Reviewer_Zeh1"
],
[
"ICLR.cc/2025/Conference/Submission9/Reviewer_bJ6d"
],
[
"ICLR.cc/2025/Conference/Submission9/Reviewer_Zeh1"
],
[
"ICLR.cc/2025/Conference/Submission9/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9/Area_Chair_d8rp"
],
[
"ICLR.cc/2025/Conference/Submission9/Reviewer_Zeh1"
]
],
"structured_content_str": [
"{\"summary\": \"The authors explore a key research question: Can the attention-free Mamba model effectively handle meta-continual learning (MCL) tasks? They reframe State Space Models (SSM) and Mamba as sequence-prediction-based continual learners, training them via meta-learning across continual learning episodes. To enhance this training, they introduce a selectivity regularization technique. Extensive experiments reveal that Mamba consistently performs well in various MCL settings, significantly surpassing other attention-free approaches and often equaling or surpassing Transformer models in performance\\u2014all while using fewer parameters and computational resources. Notably, Mamba demonstrates strong reliability, generalization, and robustness in complex scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"It is interesting to explore how Mamba performs in a meta-continual learning setting.\"], \"weaknesses\": [\"The conclusion of this paper is unsurprising, as Mamba's MCL performance aligns closely with its results on standard benchmarks.\", \"There is insufficient analysis explaining how and why Mamba outperforms other attention-free architectures and achieves comparable results to Transformer-based models.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you very much for acknowledging the notable strengths of MambaCL, including its reliability, generalization, and robustness in complex scenarios. We also appreciate your recognition of the novelty of exploring Mamba's performance in MCL setting.\\n\\nWe deeply appreciate your time and effort in reviewing our work. We hope that our responses have been satisfactory and welcome any further discussion. Should our rebuttal sufficiently address your comments, we kindly request that you consider increasing the score. Thank you for your valuable feedback.\"}",
"{\"comment\": \"Thank you very much for your time and effort in reviewing our paper.\\n>**W1. Novelty and relationship with (Lee et al., 2024).**\\n\\n> - This work is largely based on the work of (Lee et al., 2024), which first formulates the MCL problem as a sequent modeling.\\n> - This work simply replaces Transformers of (Lee et al., 2024) with a state space model Mamba.\\n> - Except this replacement, there is little novelty as its application is rather straightforward, following (Lee et al., 2024).\\n\\nThank you for your comments.\\n- We want to emphasize the novelty of our work across three key aspects, i.e., problem formulation, analytical insights on methods, and technical contributions.\\n - Our work shares a similar meta-continual learning formulation as (Lee et al., 2024). However, we emphasize that this formulation represents a general problem deserving further investigation. Beyond the basic formulation and tasks explored in (Lee et al., 2024), our work extends these investigations to broader and more realistic scenarios, such as generalization to diverse meta-test tasks and robustness to noisy scenarios. These are novel perspectives and distinguish our approach from (Lee et al., 2024).\\n - For the methodoloy aspect, our work identifies a key gap between attention-based Transformers (which store K and V for all seen samples) and suitable methods for meta-continual learning (MCL). To address this, we focus on the studies of attention-free MCL models, which sets our work apart from (Lee et al., 2024) and provides a novel direction for MCL research. Considering the significant potential of MCL, our work expands its applicability to more realistic scenarios, making substantial contributions to the field. \\n - On technical aspect, we introduce the attention-free Mamba model tailored to the MCL formulation and propose specific techniques to ensure its effectiveness. This represents a novel contribution. Unlike (Lee et al., 2024), where attention-free methods fail to match the performance of Transformers, our work demonstrates the successful development of an attention-free model for MCL. It is non-trivial. Additionally, we conduct extensive investigations and analyses of model performance across more diverse and complex scenarios, providing further insights.\\n\\n\\n---\\n>**W2. The use of Mamba instead of Transformers leads to little performance improvement as reported in Table 1-5. The main benefit of Mamba over Transformer lies in fewer parameters and increased processing speed as shown in Table 7.** \\n\\nThank you for your comments.\\n- Firstly, we want to highlight that our motivation for investigating attention-free models (e.g., Mamba, as presented in our paper) for meta-continual learning (MCL) is rooted in its alignment with the principles of continual learning. Although Transformers have demonstrated strong performance (Lee et al., 2024), relying solely on Transformers limits the broader applicability of MCL.\\n - Unlike Transformers, which store K and V for all seen samples and require a linearly increasing hidden state size, attention-free models maintain a constant hidden state size. This aligns better with the requirements and definitions of continual learning (CL), where efficiency and scalability are critical. Our primary motivation lies in addressing these natural characteristics of CL, rather than focusing solely on performance.\\n\\n\\n- Mamba is an attention-free model. 
Transformers resemble RNNs superficially but are fundamentally different because they require recomputing the full attention map at each step with a complete KV-cache, which contradicts the principles of continual learning to some extent.\\n- We conducted comprehensive experiments to evaluate the effectiveness of various methods. We observed that Mamba outperforms other attention-free methods by leveraging time-variance selective modeling.\\n- While our study does not specifically aim to establish Mamba\\u2019s superiority over Transformers, Mamba, with its significantly smaller state size and greater efficiency, can achieve performance comparable to or even exceeding that of attention-based Transformers, particularly in scenarios requiring long-term structure modeling, as detailed in the paper.\\n\\n---\\n\\n>**W3. Implementation details.**\\n\\nThank you for your comments. We have further emphasized and expanded the implementation details in both the following responses and the revised manuscript. We also commit to releasing all code upon the acceptance of the manuscript. \\n- The experimental setup and implementation details are provided in Chapter 4 (Lines 332\\u2013357). Fig. 1 illustrates the overall MCL process, while Fig. 2 focuses on the detailed Mamba block, designed following the standard Mamba structure. Additionally, Appendix B includes configurations of the various models used in our experiments. \\n- If there are any further questions or points that need clarification, please let us know, and we will do our best to address them.\", \"title\": \"Response by authors\"}",
"{\"comment\": [\"Thank you very much for your time and effort in reviewing our paper.\", \">**W1. The conclusion of this paper is unsurprising, as Mamba's MCL performance aligns closely with its results on standard benchmarks.**\", \"Thank you for your comments.\", \"We would like to kindly discuss the notable results presented in our manuscript.\", \"Unlike (Lee et al., 2024), where attention-free methods fail to match the performance of Transformers, our work demonstrates the successful development of an attention-free model for MCL. It's not strightforward or trivial to apply the Mamba model to MCL. Considering the unique characteristics of Mamba, we developed a selectivity regularization technique and applied Mamba effectively in MCL for the first time.\", \"Additionally, we conduct extensive investigations and analyses of model performance across more diverse and complex scenarios, providing further insights.\", \"Transformers resemble RNNs superficially but are fundamentally different because they require recomputing the full attention map at each step with a complete KV-cache, which contradicts the principles of continual learning to some extent.\", \"Our results demonstrate improved efficiency and comparable or better performance in specific scenarios compared to vanilla Transformers (with full KV-cache), underscoring the potential of attention-free models in MCL.\", \"The observations and conclusions from our analyses and experiments align with prior studies on Mamba in other applications (e.g., language modeling), supporting the validity of our results. Achieving these findings in the context of MCL is both non-trivial and novel.\", \"---\", \">**W2. More analysis explaining how and why Mamba outperforms other attention-free architectures and achieves comparable results to Transformer-based models?**\", \"Thank you for your comments. We have incorporated additional analysis to explain how and why Mamba outperforms better in Appendix C and D.\", \"Mamba performs better generalization to untrained stream length.\", \"The experiments shown in Figures 3a and 3b validate the generalization ability of models by meta-testing on CL episodes/sequences that differ from those seen during meta-training. Transformers generally converge more easily during meta-training compared to Mamba, due to their strong fitting ability. However, this advantage may also lead to meta-overfitting.\", \"To analyze how different models perform on these sequences, we visualize the final-layer attention weights of Transformers and the corresponding selective scores (associative indicators) of Mamba. Note that Mamba does not have explicit attention weights, we the scores relying on the connection between Mamba and Transformers described in Section 3.2.2. For models meta-trained on the 20-task, 5-shot setting, we meta-tested them and visualized their weights on 20-task, 5-shot episodes (Fig. 10), 20-task, 10-shot episodes (Fig. 11), and 40-task, 5-shot episodes (Fig. 12).\", \"Specifically, we observed that Transformers tend to either average attention or consistently focus on specific token positions in episodes that deviate from the training length. In contrast, Mamba effectively associates with relevant shots. 
This suggests that Transformers may learn pattern biases in the sequences (e.g., positional biases unrelated to content), leading to meta-overfitting during these generalization tests.\", \"Mamba is more robust to large input noise.\", \"We visualized the final-layer attention weights for test shots compared to training shots for both Mamba and Transformer, each meta-trained in a 20-task, 5-shot setting. During meta-testing, these models processed a 20-task, 5-shot episode with five noisy input shots (shot index: 8, 18, 39, 61, 75) at noise strengths of 1 (Fig. 13), 2 (Fig. 14), and 6 (Fig. 15).\", \"The results indicate that Transformers meta-trained on clean episodes tend to produce extreme attention weights (either very high or very low) on noisy or outlier shots, whereas Mamba is less affected. This observation suggests that Transformers\\u2019 learned attention mechanisms tend to associate samples based on local and independent representations. In contrast, Mamba performs more effectively by selectively associating relevant information and leveraging its recurrently updated latent state, which accumulates global sequence information.\"], \"title\": \"Response by authors\"}",
"{\"title\": \"Thank you for your comments (Round 2 Part 1/3)\", \"comment\": [\"We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. However, we noticed that some misunderstandings may have influenced certain aspects of the review. We further clarify all the questions and try to address all concerns more straightforwardly.\", \"> **Q1. ...inefficiencies associated with the KV-cache and memory utilization are more of an intrinsic issue related to the Transformer architecture itself...This challenge is not isolated to the MCL setup; it is a common issue across many applications of Transformers. In fact, attention-free models like Mamba have been proposed to address these challenges more generally...**\", \"For the general purpose of the sequence model, standard Transformers have an efficiency issue, but they are still practical. However, for MCL, we want to highlight that \\u2013 although Transformers can be applied to the sequence and produce good-looking performances, they actually violate the intrinsic requirements in CL. Transformers maintain the representations (i.e., K and V) of all seen samples, which contradicts the objectives of CL.\", \"The most critical issue of Transformers in MCL is its misalignment with the motivation, not only the efficiency issue. We thus do not agree that `\\u201c... isolated to the MCL setup; it is a common issue ...\\u201d`.\", \"For the above motivation, we focus on attention-free methods, which have the formulation aligning with the requirements of MCL. However, all the tested attention-free methods in previous work (Lee et al., 2024) cannot perform well. We thus investigate `\\u201cwhether the attention-free Mamba can perform well for MCL\\u201d`, as discussed in the introduction. This is one important perspective we want to highlight, which is essential for this MCL research area and not obvious.\", \"Directly applying Mamba on MCL is not trivial due to the difference in model structure between Mamba and Transformers and the difference between MCL and other standard sequence modeling tasks. Specific techniques are contributed. Moreover, we expand the standard MCL formulation in (Lee et al., 2024) to more realistic MCL scenarios and reflect them into experimental settings, such as the generalization analysis of novel meta-testing cases, robustness analysis of noisy cases, and cross-domain scenarios. Although the success of Mamba on language-based tasks has been observed, it is not trivial and direct to achieve success on MCL. In the general experiments, we observed Mamba can align the performance of Transformers and even outperform them in some scenarios (with explainable reason aligning Mamba\\u2019s success on some specific language-based tasks), which is specific to our work.\", \"---\", \"> **Q2. ...unclear how this represents a significant departure from merely applying Mamba to the MCL context...could the authors perhaps elaborate more on the specific changes (tailoring) that were necessary for Mamba to be effectively applied to MCL?**\", \"Firstly, We want to highlight that it is not a trivial task to adapt Mamba to MCL (even more challenging than applying Transformer to MCL), especially with the background in (Lee et al., 2024) that none of the attention-free methods (which better aligns the requirement of MCL) can work well. 
We are the first to make Mamba, or any attention-free method, work well on MCL.\", \"Secondly, we introduce the association regularization for Mamba by bridging the selective operation of Mamba with the attention operation in Transformers. The attention regularization for Transformers is straightforward, whereas the regularization for Mamba is not obvious. We appreciate that the reviewer has recognized this contribution and novelty.\", \"Additionally, we investigated how the hidden state capacity in SSM/Mamba influences the results in MCL, considering that Mamba and SSM selectively compress the context information of seen samples in the data stream. We also explore the possibility of integrating Mixture of Experts (MoE) with Mamba to enhance learning and mix multiple learning components.\"]}",
"{\"title\": \"Response by authors (Part 4)\", \"comment\": [\"> **Q.4 Robustness of Noise Input in Figure 3c**\", \"> How the noise is added.\", \"Thank you for pointing out this. The noise is added on the input of the model, i.e., $x_i$. In this context, $x_i$ represents the image embeddings extracted from the pre-trained CLIP model. We have revised the manuscript to clarify it, accordingly.\", \"> Could the authors potentially discuss some potential reasons behind Mamba's extreme robustness to large input noise?\", \"In the experiments, the modes are meta-trained on noise-free episodes. And the noise is added on randomly selected samples/shots in the meta-testing episodes. The task can also be seen as validating the ability of ignoring the irrelevant samples or contaminated outlier samples in the sequences.\", \"To directly show how the models work in this scenarios, we visualized the final layer attention weights for test shots compared to training shots for both Mamba and Transformer, each meta-trained in a 20-task, 5-shot setting. During meta-testing, these models processed a 20-task, 5-shot episode with five noisy input shots (shot index: 8, 18, 39, 61, 75) at noise strengths of 1 (Fig. 13), 2 (Fig. 14), and 6 (Fig. 15).\", \"The results indicate that Transformers meta-trained on clean episodes tend to produce extreme attention weights (either very high or very low) on noisy or outlier shots, whereas Mamba is less affected. This observation suggests that Transformers\\u2019 learned attention mechanisms tend to associate samples based on local and independent representations. In contrast, Mamba performs more effectively by selectively associating relevant information and leveraging its recurrently updated latent state, which accumulates global sequence information.\", \"---\", \"> **Q5. General comments on MCL**\", \"> Some important challenges in the MCL setup for continual learning include: 1) its application to long continual learning sequences, 2) the requirement for offline training datasets (meta-training), and 3) generalization to unseen long OOD meta-testing tasks. These challenges cannot be resolved simply by switching from transformers or their variants to Mamba.\", \"Thank you for the comments. The pointed challenges are also our motivation and the novel perspectives explored in our paper, distinguishing our work from (Lee et al., 2024). We aim to address these challenges from the broader perspective of extending MCL to more realistic and practical scenarios.\", \"In this study, our goal is not to resolve these challenges using a single specific model, such as Mamba.\", \"Our motivation for studying Mamba in the context of MCL is rooted in its alignment with the principles of continual learning. Unlike attention-based Transformers, an attention-free model (e.g., Linear Transformer or Mamba) does not require maintaining representations for all seen samples, making it inherently more suitable for continual learning.\", \"> Discussion of differences with MetaICL\", \"Thank you for pointing this out. We have included a discussion in the revision. MetaICL is designed for language text, whereas our tokens include images and labels. The underlying functions to be fitted ('the functions to fit') are distinct, although they share a common mathematical formulation. Compared to text sequences, the problems we address are inherently more complex, requiring the learning of more intricate functions and making the learning process more challenging.\"]}",
"{\"comment\": \"Thanks for the author's rebuttal. After reading the comments from other reviewers, the reviewer thinks that this paper needs to be further improved. I maintain my score.\"}",
"{\"title\": \"Response by authors (Part 1)\", \"comment\": \"Thank you very much for your time and effort in reviewing our paper.\\n>**W1. Novelty. Formulation relationship with (Lee et al., 2024). Technical novelty.**\\n\\nThank you very much for your comprehensive comments.\\n\\nWe want to emphasize the novelty of our work across three key aspects, i.e., problem formulation, analytical insights on methods, and technical contributions.\\n\\n- Our work shares a similar meta-continual learning formulation as (Lee et al., 2024). However, we emphasize that this formulation represents a general problem deserving further investigation. Beyond the basic formulation and tasks explored in (Lee et al., 2024), our work extends these investigations to broader and more realistic scenarios, such as generalization to diverse meta-test tasks and robustness to noisy scenarios. These are novel perspectives and distinguish our approach from (Lee et al., 2024).\\n\\n- For the methodoloy aspect, our work identifies a key gap between attention-based Transformers (which store K and V for all seen samples) and suitable methods for meta-continual learning (MCL). To address this, we focus on the studies of attention-free MCL models, which sets our work apart from (Lee et al., 2024) and provides a novel direction for MCL research. Considering the significant potential of MCL, our work expands its applicability to more realistic scenarios, making substantial contributions to the field. \\n\\n- On technical aspect, we introduce the attention-free Mamba model tailored to the MCL formulation and propose specific techniques to ensure its effectiveness. This represents a novel contribution. Unlike (Lee et al., 2024), where attention-free methods fail to match the performance of Transformers, our work demonstrates the successful development of an attention-free model for MCL. It is non-trivial. Additionally, we conduct extensive investigations and analyses of model performance across more diverse and complex scenarios, providing further insights.\\n\\nWe will address the specific concerns raised in the following points, ensuring clarity any confusion regarding our proposed techniques.\\n\\n---\\n\\n\\n>**W2. Results of Mamba comparing to Transformer; Mechanisms leading to the results; \\\"... deeper analysis is crucial, especially if the primary motivation ... is to use Mamba ... instead of transformers for the same problem settings.\\\"**\\n\\nThank you for your comments. \\n\\n- The motivation for investigating attention-free models (e.g., Mamba, as presented in our paper) for meta-continual learning (MCL) stems from their intrinsic advantages. Unlike Transformers, which store K and V for all seen samples and require a linearly increasing hidden state size, attention-free models maintain a constant hidden state size. This aligns better with the requirements and definitions of continual learning (CL), where efficiency and scalability are critical. Our primary motivation lies in addressing these natural characteristics of CL, rather than focusing solely on performance.\\n\\n\\n- Given that all attention-free methods studied in (Lee et al., 2024) fail to match the performance of Transformers, we developed an advanced Mamba model tailored for MCL to explore the potential. All experiments involving different models were conducted under the same conditions, ensuring fairness and consistency with prior methods. 
Mamba demonstrates superior performance compared to other attention-free methods, leveraging time-variance selective modeling. This enables it to align with or even surpass the performance of attention-based Transformers, particularly in scenarios that depend on long-term structures, as discussed in the paper. While the studies and observations in our work are novel, the results are consistent with prior studies on Mamba in other applications (e.g., language modeling).\\n\\nFurther analyses and insights are demonstrated in the following to address the suggested clarifications.\"}",
"{\"title\": \"Response by authors\", \"comment\": \"We sincerely thank all reviewers for their time and effort in reviewing our manuscript.\\nWe have considered each comment and have responded to each reviewer individually. An updated version of the manuscript has been uploaded accordingly.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Thank you for your comments (Round 2 Part 2/3)\", \"comment\": [\"> **Q3. ...I am aware one technical change being made is the regularization proposed in this paper. While the regularization technique does contribute to stabilizing the meta-training process, I am concerned that the observed results (regarding better generalization of Mamba) may largely stem from the use of the Mamba model itself rather than this algorithmic development...**\", \"1) Firstly, in principle, directly using any existing Mamba model and implementation for other tasks cannot work on our task, i.e., MCL. We can see many applications of the Mamba model (and Transformers) on different applications, which are all Mamba models (or Transformers, such as (Lee et al., 2024)). Although new techniques are not our sole contributions, considering that the reviewer has recognized and admitted our main novel techniques and contributions and we further highlighted more detailed technical novelties, we think it has been clear that our work is clearly different to existing works.\", \"2) > \\u201cThis technique appears to have very limited effectiveness on the attention-free model beyond Mamba, which still resulted in significant meta-overfitting.\\u201d\", \"Firstly, we never claimed to use this regularization loss to solve the \\u201cmeta-overfitting\\u201d issue. We think the reviewer has a misunderstanding here.\", \"Without such a regularizer, meta-training of all models is difficult and almost cannot converge to a satisfactory solution (as discussed in Sec. C.1). The regularizer is used to lead the training converge. Our main contribution is proposing this regularizer to Mamba by bridging the selective operation and attention. And we apply similar loss for all models in experiments for fair comparison. Such regularization is also effective for other models for convergence, which is also observed in (Lee). In our rebuttal, we only want to highlight that the \\u201cmeta-overfitting\\u201d (mentioned by the reviewer) is not related to this regularization loss.\", \"The \\u201cmeta-overfitting\\u201d behavior of Transformers in Fig. 3 may be mainly related to the model design. This observation can align with the analysis of Transformer and Mamba (Park et al., 2024; Garg et al., 2022; Bornschein et al., 2024).\", \"3) > \\u201cThe authors mentioned that generalization performance is relatively insensitive to the strengths of the regularization (Fig. 9 and 3). This raises questions about the role of the regularization technique in achieving the reported improvements for Mamba.\\u201d\", \"Without this regularization, the models cannot converge stably and cannot converge to satisfactory results. We show this in Fig. 6.\", \"Fig. 8 shows that the models are not sensitive to the setting of the hyperparameters in a proper and large range, which is a good property of the technique. The experiments are conducted in a standard way. If we set the hyperparameter as a very small or very large value, the performance will be changed.\", \"4) > \\u201cAs a result, would you agree that the empirical performance improvements observed in comparison to Transformers \\u2014 specifically in memory efficiency and generalization to longer input sequences \\u2014 are more likely inherent properties of the Mamba model itself, which were already highlighted in the original Mamba paper?\\u201d\", \"The good performance of the Mamba model is from the design of the Mamba model. 
It is also what we want to highlight in the paper: the attention-free model, like Mamba, can also perform well on MCL. It is different from the previous work (Lee et al., 2024).\", \"We do not agree that they `\\u201cwere already highlighted in the original Mamba paper\\u201d`. The results on language modeling in the original paper cannot be directly applied or extended to other domains. That is also why it has been important to investigate how to generalize different models like Mamba (Park et al., 2024) and Transformers (Garg et al., 2022; Bornschein et al., 2024) to other tasks.\", \"For MCL, (Lee et al., 2024) shows that the attention-free methods cannot perform well with a significant gap to the Transformer, although attention-free methods like Linear Transformer have been proven effective on language tasks. We provide novel insights, new techniques, and empirical results to make the attention-free method (e.g., Mamba) work on MCL. As we highlighted several times, making Mamba work well on MCL is not trivial where the essential component is the associative regularization and other details.\"]}",
"{\"title\": \"Response by authors (Part 2)\", \"comment\": [\">**Q1. Effectiveness of the Proposed Regularization Technique**\", \"> \\\"improving the meta-training stability and convergence for all models.\\\"; learning curves for all models during meta-training with and without selectivity regularization technique.\", \"Due to the complexity of the MCL task, the proposed regularization technique plays a crucial role in stabilizing and improving the training process for all models. To highlight its impact, we have added meta-training loss curves in Fig. 6 of the revised manuscript, showing the initial 2500 steps for different models with and without selectivity regularization. The results indicate that models without our regularization struggle to converge and exhibit significant oscillations during training, highlighting the effectiveness of the regularization.\", \"> Regularization strength ablation study across multiple models.\", \"Applying regularization to Transformers or Linear Transformers is straightforward, as it involves direct attention map adjustments. However, for Mamba\\u2019s selective modeling, which implicitly bridges it with Transformer architectures, this process is more complex. Thus, the ablation study in Fig. 4 focuses primarily on the Mamba model.\", \"To address the reviewer\\u2019s concern, we have expanded the revised manuscript to include the sensitive study of the regularization strength ($\\\\lambda$) for different models, as shown in Appendix C.2 (Fig. 7). The results demonstrate that all models exhibit stability within a wide and appropriate range of $\\\\lambda$, providing evidence of consistent patterns.\", \"---\", \"> **Q2. Experiment Implementation Details**\", \">Why using same hyperparameters?\", \"Thank you for pointing out this.\", \"During the initial phase of this project, we started by experimenting using the same learning rate hyperparameter as in previous works. Through experimentation with different learning rates, we observed that both our model and others (under the same setting, tasks, and input types) were largely insensitive to this parameter across a wide range. To ensure a fair comparison, we therefore adopted the same learning rate as used in (Lee et al., 2024) for the compared methods, by default. We acknowledge that the term \\\"following\\\" is confusing and have corrected this in the revision. We have added more implementation details in the revised version of the manuscript. We commit to releasing all code upon the acceptance of the manuscript.\", \"Furthermore, we have included Fig. 8 in Appendix C.3 that illustrates the performance of the models using various initial learning rates $\\\\{5\\\\times10^{-5}, 1\\\\times10^{-4}, 2\\\\times10^{-4}, 5\\\\times10^{-4}\\\\}$ on both ImageNet-1K and CIFAR-100 datasets in Fig. 8. The results indicate that within a reasonable range, the learning rate does not significantly affect model performance. In our experiments, we set the initial learning rate to $1\\\\times10^{-4}$, with decays of 0.5 every 10,000 steps.\", \"> Are the hyperparameters adjusted?\", \"As discussed above, we observed that the models\\u2019 behavior was largely insensitive to these hyperparameters. Consequently, we did not perform extensive optimization or search for optimal hyperparameter settings. Instead, we adhered to the experimental settings outlined by Lee et al. (2024) by default. As demonstrated in the hyperparameter sensitivity analyses (e.g., $\\\\lambda$ in Fig. 7 and learning rate in Fig. 
8), the chosen hyperparameter settings do not affect the results and do not influence the conclusions.\"]}",
"{\"title\": \"Thank you for your comments (Round 2 Part 3/3)\", \"comment\": \"> **Q4. More on this regularization technique. I agree that it stabilizes meta-training, but I am still not sure if I understood the rationale behind applying this technique to all models, as this would prevent us from seeing how different models behave intrinsically.\\nInitially, I thought meta-training was impossible without this technique, but it seems that (Lee et.al, 2024) managed to produce meaningful results without this technique. Although the meta-training losses, in the new Figures, showed more oscillation, they still showed a clear decreasing trend indicating convergence...**\\n\\n1) If without such kind of regularizations, all models cannot converge to a reasonable solution. Fig. 6 showing the initial training phases (2500 steps) for different models with and without selectivity regularization. The losses are 3\\u20135 times higher compared to the models with regularization applied and successfully converging. Beyond 2500 steps, the losses oscillate and no longer decrease. `\\u201cmeta-training was impossible without this technique\\u201d` \\u2013 it is correct. \\n2) - Note that the reported results of Transformers and Linear Transformers in the paper of (Lee et al., 2024) also rely on such kind of regularization. The Transformer implementation of (Lee et al., 2024) cannot produce reasonable results without such kind of regularization. \\n - It is straightforward to regularize the attention map of Transformers. But there is no explicit attention or association process in SSM-based Mamba, our novelty is mainly on proposing the regularization for our Mamba model relying on bridging Mamba/SSM and Transformer. \\n - We do not want to highlight the effectiveness of such regularization on other models. We mentioned the performance of the regularization on other models to address your concerns related to the effects or this regularization on others. And we only highlight that such kind of regularization is used for all models for fair comparison in implementation. The main information to deliver is only that (a) the regularization helps the meta-training of different models (consistent with the discussions of (Lee et al., 2024)), (b) the performance issues of Transformers (such as \\u201cmeta-overfitting\\u201d) are not caused by this regularization. We apologize for the potential confusion may be caused by the wording in the rebuttal. \\n\\n\\n\\n---\\n> **Q5. I have some reservations about the work provides a novel direction for MCL in the use of attention-free methods. As previously mentioned, the improvements attributed to the empirical improvements over transformers seem to be closely tied to the specific characteristics of the Mamba model. The significant meta-overfitting observed in other attention-free methods suggests that the broader applicability of these models for MCL may be limited by the specific configurations of your meta-training setup.**\\n\\nWe want to clarify and highlight the motivation and the rationale again. We focus on attention-free models for MCL not only for their efficiency or effectiveness on performance. \\n1) MCL is studied to meta-learn continual learners for CL without the need to maintain all previously seen methods. \\n2) Transformer needs to maintain the K and V of all seen samples. Despite good numerical performances reported in (Lee et al., 2024), Transformer is not an ideal or proper choice for MCL, due to the misalignment with the objective of MCL. 
\\n3) We highlight this issue of the Transformer and propose to focus on attention-free methods, which do not need to save representations of all samples. We focus on the direction of using attention-free models for MCL because their design aligns with the definition of MCL better. \\n4) Although attention-free models fit the objective of MCL better, all attention-free models cannot work well as reported by (Lee et al., 2024) (and also as you can recognize). Note that other attention-free models can perform well on many general language-based applications, which is inconsistent with the unsatisfactory performances on MCL. (It can also show that the effectiveness of a model on language-based applications does not naturally mean effectiveness on MCL.) We thus focus on studies of model Mamba on MCL and show that an attention-free model can also perform well on MCL in more scenarios. This observation is novel compared to previous work (Lee et al., 2024).\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you very much for recognizing the interesting results and the strengths of Mamba in MCL, including its generalization capabilities and robustness across various benchmarks. We appreciate your acknowledgment of the proposal of MambaCL as a strong sequential approach and the extension to MambaCL-MoE.\\n\\nWe deeply appreciate your time and effort in reviewing our work. We hope that our responses have been satisfactory and welcome any further discussion. Should our rebuttal sufficiently address your comments, we kindly request that you consider increasing the score. Thank you for your valuable feedback.\"}",
"{\"title\": \"Thank you for your detailed response (part 2/2)\", \"comment\": \"I genuinely value your participation in the rebuttal and discussion period. However, at the current stage, some of my concerns remain. Please see details as follows.\\n\\n>our work identifies a key gap between attention-based Transformers (which store K and V for all seen samples) and suitable methods for meta-continual learning (MCL).\\n\\nI agree the authors have clearly explained these issues for MCL. But I would argue that the inefficiencies associated with the KV-cache and memory utilization are more of an intrinsic issue related to the Transformer architecture itself\\u2014specifically, the softmax attention mechanism. This challenge is not isolated to the MCL setup; it is a common issue across many applications of Transformers. In fact, attention-free models like Mamba have been proposed to address these challenges more generally. I would appreciate your thoughts on this perspective.\\n\\n\\n>On technical aspect, we introduce the attention-free Mamba model tailored to the MCL formulation and propose specific techniques to ensure its effectiveness. This represents a novel contribution. \\n\\nIn addition to the above, more importantly though, it remains unclear how this represents a significant departure from merely applying Mamba to the MCL context.\\n\\nThe authors mentioned *\\\"introduce the attention-free Mamba model tailored to the MCL formulation and propose specific techniques to ensure its effectiveness\\\"*, could the authors perhaps elaborate more on the specific changes (tailoring) that were necessary for Mamba to be effectively applied to MCL (besides the regularization technique which we can discuss below)? I think this would be particularly helpful in illustrating these points.\\n\\n>Unlike (Lee et al., 2024), where attention-free methods fail to match the performance of Transformers, our work demonstrates the successful development of an attention-free model for MCL... \\n>\\n>... such as generalization to diverse meta-test tasks and robustness to noisy scenarios. These are novel perspectives and distinguish our approach from\\n\\nOff-course, I am aware one technical change being made is the regularization proposed in this paper.\\n\\nWhile the regularization technique does contribute to stabilizing the meta-training process, I am concerned that the observed results (regarding better generalization of Mamba) may largely stem from the use of the Mamba model itself rather than this algorithmic development. Specifically, my reasoning includes:\\n- This technique appears to have very limited effectiveness on the attention-free model beyond Mamba, which still resulted in significant meta-overfitting. \\n- The authors mentioned that generalization performance is relatively insensitive to the strengths of the regularization (fig 9,3). This raises questions about the role of the regularization technique in achieving the reported improvements for Mamba.\\n\\nAs a results, would you agree that the empirical performance improvements observed in comparison to Transformers \\u2014 specifically in memory efficiency and generalization to longer input sequences \\u2014 are more likely inherent properties of the Mamba model itself, which were already highlighted in the original Mamba paper?\\n\\n\\n>Due to the complexity of the MCL task, the proposed regularization technique plays a crucial role in stabilizing and improving the training process for all models.\\n\\nMore on this regularization technique. 
I agree that it stabilizes meta-training, but I am still not sure if I understood the rationale behind applying this technique to all models, as this would prevent us from seeing how different models behave intrinsically.\\n\\nInitially, I thought meta-training was impossible without this technique, but it seems that Lee et.al 2024 managed to produce meaningful results without this technique. Although the meta-training losses, in the new Figures, showed more oscillation, they still showed a clear decreasing trend indicating convergence. I do not require additional experiments on this point, but I would appreciate your insights.\\n\\n>... We aim to address these challenges from the broader perspective of extending MCL to more realistic and practical scenarios...\\n> In this study, our goal is not to resolve these challenges using a single specific model, such as Mamba.\\n\\n> ... we focus on the studies of attention-free MCL models, which sets our work apart from (Lee et al., 2024) and provides a novel direction for MCL research.\\n\\nI have some reservations about the work provides a novel direction for MCL in the use of attention-free methods. As previously mentioned, the improvements attributed to the empirical improvements over transformers seem to be closely tied to the specific characteristics of the Mamba model. The significant meta-overfitting observed in other attention-free methods suggests that the broader applicability of these models for MCL may be limited by the specific configurations of your meta-training setup.\"}",
"{\"summary\": \"This work addresses meta-continual learning using a state space model Mamba. It performs comprehensive experiments across various CL benchmarks and reports several interesting results, including comparison with Transformers and extension to Mamba mixture-of-experts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"1. It proposes MambaCL as a strong sequential approach to meta-continual learning.\", \"2. It performs thorough experiments and discover multiple interesting observations.\", \"The use of Mamba may be more helpful for generalization over Transformers as discussed in Fig.3.\", \"MambaCL is particularly effective in on fine-grained recognition tasks as shown in Table 3.\", \"Integration of Mamba with MoE improves the MCL performance as reported in Table 6.\"], \"weaknesses\": [\"1. The technical novelty is limited.\", \"This work is largely based on the work of (Lee et al., 2024), which first formulates the MCL problem as a sequent modeling.\", \"This work simply replaces Transformers of (Lee et al., 2024) with a state space model Mamba.\", \"Except this replacement, there is little novelty as its application is rather straightforward, following (Lee et al., 2024).\", \"2. The use of Mamba instead of Transformers leads to little performance improvement as reported in Table 1-5.\", \"The main benefit of Mamba over Transformer lies in fewer parameters and increased processing speed as shown in Table 7.\", \"3. Implementation details are missing.\", \"Appendix is too sketchy to fully understand how the MambaCL is implemented.\", \"The code is not provided.\"], \"questions\": \"Please see the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your detailed response (part 1/2)\", \"comment\": \"Dear Authors,\\n\\nI apologize for the delayed response.\\n\\nI sincerely appreciate the time and effort the authors have dedicated to providing a detailed response to my review and making revisions to the original manuscript. I would like to highlight a few positive aspects:\\n- Thank you for providing additional details regarding noise perturbation, hyperparameters, and experimental settings.\\n- I recognize the effectiveness of the proposed regularization technique in stabilizing meta-training.\\n- I appreciate the additional visualizations of attention scores for the models.\"}",
"{\"title\": \"Response by authors (Part 3)\", \"comment\": [\"> **Q.3 Meta-Overfitting illustrated in Figures 3a and 3b**\", \"> More insights for why Mamba performs better on this experiment.\", \"The experiments shown in Figures 3a and 3b validate the generalization ability of models by meta-testing on CL episodes/sequences that differ from those seen during meta-training. Specifically, the models are meta-trained on \\u201c20-task, 5-shot\\u201d MCL episodes and meta-tested on episodes with task and shot numbers exceeding those in meta-training. It is also a novel perspective we want to investigate in the paper.\", \"Transformers generally converge more easily during meta-training compared to Mamba, due to their strong fitting ability. However, this advantage may also lead to meta-overfitting.\", \"To analyze how different models perform on these sequences, we visualize the final-layer attention weights of Transformers and the corresponding selective scores (associative indicators) of Mamba. Note that Mamba does not have explicit attention weights, we the scores relying on the connection between Mamba and Transformers described in Section 3.2.2. For models meta-trained on the 20-task, 5-shot setting, we meta-tested them and visualized their weights on 20-task, 5-shot episodes (Fig. 10), 20-task, 10-shot episodes (Fig. 11), and 40-task, 5-shot episodes (Fig. 12).\", \"Specifically, we observed that Transformers tend to either average attention or consistently focus on specific token positions in episodes that deviate from the training length. In contrast, Mamba effectively associates with relevant shots. This suggests that Transformers may learn pattern biases in the sequences (e.g., positional biases unrelated to content), leading to meta-overfitting during these generalization tests.\", \"> Does the setting of hyperparameters (learning rate) affect meta-overfitting?\", \"As discussed above, Fig. 8 illustrates that the models maintain robustness across a reasonable range of learning rates. In practice, we observe that models with different learning rates encounter similar behaviours.\", \"> Does selectivity regularization affect meta-overfitting?\", \"Without the regularization, models struggle to converge and exhibit significant oscillations during training, as shown in Fig. 6. Therefore, we evaluated various models with a small regularization strength (0.1) to assess the impact of regularization on this generalization experiment and the meta-overfitting issue. The results indicate that regularization strengths of 0.1 (Fig. 9) and 0.5 (Fig. 3) lead to similar phenomena across different models. This experiment and the $\\\\lambda$ sensitivity analysis (Fig. 7) show that the results are not influenced by the hyperparameter setting.\", \"> \\\"... transformers and their variants in Figures 3a and 3b is somewhat surprising. Specifically, in Figure 3b, adding more training shots per class even, and almost monotonically, decreased the classification accuracy on the queries.\\\"\", \"The continual learning ability of the MCL models is given by the meta-training. Although more samples an episode/sequence contain more information, the models can perform significantly worse given the episodes different to those seen in meta-training. 
The models in this experiment (e.g., Transformers) are trained using a 5-shot pattern, and providing additional shots might lead the model to mistakenly perceive them as belonging to different tasks, potentially leading to overfitting (given that Transformers may learn to associate every 5 shots with each task, as indicated by the position encoding).\"]}",
"{\"metareview\": \"The paper builds on the meta continual learning (MCL) framework proposed by Lee et al., 2024, by introducing Mamba as the sequential model to replace transformers or attention-free variants. This substitution aims to reduce computational costs while maintaining competitive performance. Furthermore, the authors propose a selective regularization technique during meta-training to strengthen the connection between query tokens and previously correlated input tokens. Experimental results indicate that Mamba achieves comparable generalization and robustness to Transformer variants in MCL, while requiring less memory during inference.\\n\\nThe paper\\u2019s strengths include its focus on an important research question. However, the work suffers from significant weaknesses, including limited technical novelty and marginal contributions to advancing the field.\\n\\nGiven these limitations, I recommend rejecting this submission.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, Reviewers Zeh1 and BA8y actively engaged with the authors, while Reviewer bJ6d acknowledged reading the responses.\\n\\nAfter carefully reviewing all the concerns raised by the reviewers, I found that the authors failed to adequately address several critical issues, including key points highlighted by Reviewer bJ6d.\\n\\nA central issue raised by all reviewers pertains to the lack of technical novelty in the proposed method. Specifically, the reviewers were unsurprised by the effectiveness of replacing Transformers in (Lee et al., 2023) with Mamba, given (1) Mamba\\u2019s established effectiveness in sequence modeling and (2) the recasting of MCL as sequence modeling in (Lee et al., 2023). Unfortunately, the authors did not provide a convincing response to this concern.\\n\\nOne of the primary concerns from Reviewer Zeh1 relates to the confounding effect of the proposed regularization. Notably, Reviewer Zeh1 made an effort to help the authors articulate their key technical novelty, suggesting that it should go beyond the straightforward application of Mamba to (Lee et al., 2023). While the authors argued that this application is non-trivial, their claim appears to rely heavily on the proposed regularization. However, the authors themselves acknowledged that \\\"such regularization is also effective for other models for convergence, as observed in (Lee),\\\" which further undermines its originality.\\n\\nThese unresolved concerns significantly restrict the scope and potential impact of this work, limiting its appeal to the broader community. As such, I believe it does not meet the high standards required for acceptance at the prestigious ICLR conference.\"}",
"{\"summary\": \"The paper follows the meta continual learning (MCL) framework as outlined by Lee et al., 2024. The authors meta-train sequential models on offline meta-training sequences to enhance their sequence modelling capability. The authors propose using Mamba as the sequential model instead of transformers or attention-free variants to alleviate high computational costs while still achieving satisfactory performance. Additionally, the authors introduce a selective regularization technique for meta-training, which enhances the association between query tokens and previously correlated input tokens. Experimental results demonstrate that Mamba achieves improved generalization and robustness compared to transformer variants in MCL, while using less memory for inference.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-structured and easy to follow.\", \"The authors clearly explained the issue of increased compute complexity with using transformers for MCL.\"], \"weaknesses\": \"In general:\\n\\n- The paper shows limited novelty. The problem formulation, specifically the recasting of the continual learning problem as a sequential modelling problem in recurrent models, mirrors the previous work by Lee et al., 2024. From the technical side, the authors propose a new selective regularization technique for meta-training and claim it improves training stability and convergence. While the technique itself is novel, there are several questionable aspects regarding this technique and the authors' claims. I cannot fully credit the novelty of this technique until these issues are addressed.\\n\\n- Although the authors claim better generalization and robustness when using Mamba instead of transformers based on empirical results, these results appear somewhat questionable. Furthermore, there is a lack of new insights and detailed analysis; for instance, the authors did not delve deeper into the underlying mechanisms that led to these results. This deeper analysis is crucial, especially if the primary motivation of the paper is to use Mamba (or any different model architecture) instead of transformers for the same problem settings.\\n\\nPlease kindly refer to the questions for more details.\", \"questions\": [\"I am open to discussion and willing to reconsider my score if my major concerns can be adequately addressed.\", \"**Claims on the Effectiveness of the Proposed Regularization Technique**\", \"For example, lines 326-329 state:\", \"> We apply this regularization to MambaCL and other sequence prediction models (weighted by a scalar \\u03bb) together with the MCL objective in Eq. (7), which improves the meta-training stability and convergence for all models.\", \"The authors do not fully support their claims about \\\"improving the meta-training stability and convergence for all models.\\\" Specifically, there are no experiments showing learning curves (or similar alternatives) for all models during meta-training to compare results with and without this technique.\", \"A seemingly related empirical evidence is presented in Figure 4. However, the results appear to pertain to a *single* model, and it is unclear, based on the figure caption and the text in lines 481-485, which specific model (i.e., Mamba, transformers) was used in this ablation study. 
Although the experiment demonstrates the sensitivity of meta-testing performance to the regularization strength, it lacks comprehensive evidence across multiple models to support the authors claim.\", \"**Experiment Implementation Details**\", \"In the paper, it is mentioned:\", \"> Following Lee et al., 2024, we set the initial learning rate to 1 \\u00d7 10\\u207b\\u2074...\", \"Cloud the authors please provide some motivations for using the same hyperparameters as in Lee et al., 2024, given that the meta-training setups differ? Specifically, the authors used a pre-trained CLIP backbone as a visual encoder and included the proposed regularization loss across all models.\", \"Moreover, were these hyperparameters adjusted for different model architectures based on some meta-validation sets, e.g., for linear transformers and Mamba? If not, wouldn't using fixed hyperparameters for all experiments and models potentially lead to sub-optimal results? If these hyperparameters are not optimal for every models, this could produce misleading results and potentially invalidate the observations.\", \"**Meta-Overfitting in Figures 3a and 3b**\", \"The authors observed that transformers and their variants seem to suffer from severe meta-overfitting based on the results in Figures 3a and 3b. However, the potential underlying causes for this overfitting are quite unclear. Specifically:\", \"As previously mentioned, based on the current description of the implementation details, it's unclear whether this overfitting is due to the use of improper hyperparameters, such as learning rates.\", \"Additionally, it is undetermined whether this overfitting is influenced by the use of regularization terms for all models during meta-training. Would removing this regularization loss for transformers significantly reduce meta-overfitting?\", \"Could the authors please provide some insights into why Mamba did not suffer from the same degree of overfitting?\", \"While the occurrence of meta-overfitting is expected, the degree of overfitting\\u2014particularly in relation to the number of training tasks and training shots used in meta-training\\u2014exhibited by transformers and their variants in Figures 3a and 3b is somewhat surprising. Specifically, in Figure 3b, adding more training shots per class even, and almost monotonically, decreased the classification accuracy on the queries.\", \"**Robustness in Figure 3c**\", \"It is somewhat unclear how the authors performed the input noise perturbation. Specifically, what does $ x_i$ in line 473 refer to? Is it the original input image to the CLIP encoder, or the extracted image embeddings that serve as inputs to the sequential learning models?\", \"I find it very interesting that Mamba exhibits excellent robustness to input noise, even with a standard deviation as large as 10. Could the authors potentially discuss some potential reasons behind Mamba's extreme robustness to large input noise?\", \"**General Comments on MCL**\", \"Some important challenges in the MCL setup for continual learning include: 1) its application to long continual learning sequences, 2) the requirement for offline training datasets (meta-training), and 3) generalization to unseen long OOD meta-testing tasks. 
These challenges cannot be resolved simply by switching from transformers or their variants to Mamba.\", \"Are there any differences on the problem formulation and the meta-training setups between the ones in the paper and the one in MetaICL: Learning to Learn In Context, Min et al., NAACL 2022?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1TJSnL3ywS | LLM Distillation for Efficient Few-Shot Multiple Choice Question Answering | [
"Joan Santoso",
"Patrick Sutanto",
"Esther Irawati Setiawan"
] | Multiple Choice Question Answering (MCQA) is an important problem with numerous real-world applications, such as medicine, law, and education. The high cost of building MCQA datasets makes few-shot learning pivotal in this domain. While Large Language Models (LLMs) can enable few-shot learning, their direct application in real-world scenarios is often hindered by their high computational cost. To address this challenge, we propose a simple yet effective approach that uses LLMs for data generation and scoring. Our approach utilizes LLMs to create MCQA data which contains questions and choices, and to assign probability scores to the generated choices. We then use the generated data and LLM-assigned scores to finetune a smaller and more efficient encoder-only model, DeBERTa-v3-base by leveraging distillation loss. Extensive experiments on the Massive Multitask Language Understanding (MMLU) benchmark demonstrate that our method improves accuracy from 28.9\% to 39.3\%, representing a gain of over 10\% compared to a baseline finetuned directly on 5-shot examples. This shows the effectiveness of LLM-driven data generation and knowledge distillation for few-shot MCQA. | [
"Few-shot learning",
"Multiple Choice Question Answering (MCQA)",
"Data generation",
"Knowledge distillation",
"Multiple Choice Question Answering (MCQA)"
] | https://openreview.net/pdf?id=1TJSnL3ywS | https://openreview.net/forum?id=1TJSnL3ywS | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zNz3XXEkaZ",
"zHkEGElBLm",
"yaq1EAojcz",
"xdABltD5Jo",
"wb33yasIAx",
"sP8uRvUTpE",
"quTEFgSBkT",
"piPm6bGw8h",
"kWbqjrBhzE",
"kWDq70aEFc",
"i7hPwzZqfI",
"gf6Wx8I0mO",
"djWG57n0ec",
"d0ZysHMBp8",
"arOV5PPKYg",
"aQGD83S4S0",
"ZqtEZ8YfMr",
"ZLjZaNWMHu",
"XRCav6aVyF",
"OmoDmiWzmu",
"O1mWa7hLb0",
"Lo89RqGmwa",
"K7EXWdAEVz",
"HrN4AKCK3K",
"FxuuxDQSzP",
"FgHfYjqTXm",
"BSWS0zO0JD",
"AG7CgiRQ0u",
"96UsnSE8DZ",
"7LT26jwwdG",
"4WXnIEUyYU"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1733057808858,
1732177044980,
1730547584053,
1732178477684,
1731083798074,
1733058014702,
1732177948769,
1734058947230,
1733226521360,
1729113860595,
1732176946987,
1733057780142,
1733057983854,
1732178393186,
1732178145209,
1730536015727,
1732178290304,
1733131163477,
1733118418356,
1732901330598,
1732177527624,
1732532614395,
1732177310547,
1733190396009,
1733131104049,
1732177721636,
1733057943334,
1732177889794,
1732901277456,
1733200754360,
1730645192176
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_snni"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_U1vS"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_U1vS"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_dTam"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_mtLv"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_snni"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_snni"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_snni"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3797/Reviewer_cyRj"
]
],
"structured_content_str": [
"{\"title\": \"Rebuttal Feedback Request for Reviewer cyRj\", \"comment\": \"Dear Reviewers cyRj\\n\\nA gentle reminder regarding the rebuttal for paper 3797. The rebuttal period is ending soon, and we haven't yet received feedback from you. Your insights are greatly appreciated. We would be grateful if you could take a look at your earliest convenience. Thank you.\"}",
"{\"title\": \"Response to Reviewer U1vS(Part 2)\", \"comment\": \"> W3. The model does not outperform Tasksource model which is obtained by the multi-task training of the same backbone: the improvement on MMLU is marginal (+0.5), and on ARC data the proposed approach works significantly worse.\\n\\nWhile the average improvement of 0.5 on MMLU appears marginal, it's important to consider that this is an aggregated result across 57 diverse datasets. Furthermore, the improvement on certain subsets, such as STEM (+1.6), is more substantial. For the ARC tasks, fine-tuning the Tasksource model with our JSON-distilled data yields significant gains, which we shown on table below.\\n\\n\\n| Method \\t\\t | ARC-Easy | ARC-Challenge \\t|\\n|---------------------------|----------|----------------|\\n| Tasksource \\t\\t | 72.8 | 51.2 \\t\\t|\\n| Decompose distill \\t | 67.8 | 45.3 \\t\\t|\\n| JSON distill \\t\\t | 69.8 | 48.6 \\t\\t|\\n| Tasksource + JSON distill | 74.5 | 54.7 \\t\\t|\\n \\n\\nWe achieve an accuracy of 74.5% on ARC-Easy and 54.7% on ARC-Challenge. The JSON-distill approach, even without leveraging Tasksource's multi-task training, achieves performance close to the Tasksource baseline, highlighting the effectiveness of our data generation and distillation method. Our method achieves this performance using only a limited number of initial examples for data generation, whereas Tasksource benefits from extensive multi-task training. This demonstrates the efficiency of our method in leveraging limited real-world data.\\n\\n> Q1. Do you have any idea, if the huge improvement of the performance of DeBerta after distillation is related to improving of the model's question answering ability, or just due to learning of Multiple Choice QA format?\\n\\nTo investigate whether the performance improvement comes solely from learning the MCQA format or also from improved question answering ability, we conducted the following experiment. We use the LLaMa-3.1-8B-Instruct generated 1024 ARC Easy examples with JSON data generation methods. We then trained a DeBERTa-v3-base model on this generated ARC Easy data with distillation. We compared its performance on MMLU with a model trained directly on MMLU generated data with distillation and the 5-shot baseline.\\n\\n|Method \\t\\t | STEM | Social Science | Humanities | Other | Average |\\n|--------------------------|------|----------------|------------|-------|---------|\\n|Trained on MMLU generated | 32.5 | 43.2 \\t | 44.3 \\t| 40.6 | 39.3 | \\n|Trained on Arc-e 5-shot | 22.0 | 22.8 \\t | 21.9 \\t| 22.5 | 22.3 | \\n|Trained on Arc-e generated| 32.3 | 40.5 \\t | 41.4\\t| 40.3 | 37.9 | \\n\\nTraining on the ARC Easy generated data significantly improves performance over the 5-shot baseline. However, the model trained on MMLU generated data still performs better, achieving an average accuracy of 39.3%. This gap suggests that our method is not merely teaching the model the MCQA format, but is also enabling it to acquire task-specific knowledge relevant to the MMLU datasets. Therefore, we conclude that the improvements observed from our method stem from both an improved understanding of the MCQA format and, more importantly, an enhanced ability to answer questions within the specific domains covered by MMLU.\"}",
"{\"summary\": \"This paper studies the possibility of encoder model with LLM-generated dataset and knowledge distillation. To address current effortful MCQA benchmark making, this paper utilizes LLM\\u2019s ability in few shot prompting with two formatting strategies. Then, by distilling the loss of bigger LLM into small, encoder-only model, the paper shows the efficient way to achieve performance nearing that of bigger LLM.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This paper sheds light again on the encoder-only model, which had been receiving less attention recently.\", \"The methodology's adaptability to existing domain-specific benchmarks suggests its potential for broad application across diverse fields.\"], \"weaknesses\": \"**[Novelty is limited]**\\n\\nThe paper\\u2019s novelty appears limited, as it does not introduce a new dataset and relies primarily on formatting prompts either in full JSON format or as segmented parts, raising questions on whether these methods constitute a genuinely novel approach. Furthermore, the distillation technique applied here does not seem particularly innovative, as it essentially reduces to a form of fine-tuning.\\n\\n**[Using Encoder-only Models - limited Experimental setups]**\\n\\nAdditionally, while the paper suggests the encoder-only model\\u2019s powerful capabilities, this claim is primarily based on improvements from distillation and model size reduction. These factors alone may not suffice to substantiate the model\\u2019s claimed \\\"power\\\" without more substantial baseline comparisons, particularly in tasks beyond fine-tuning.\\n\\n**[Inadequate analysis of suggested method]**\\n\\nThere is inadequate validation of the quality of the LLM-generated dataset, which raises further concerns about the reliability and applicability of the findings.\", \"questions\": [\"Has there been any comparison with recently released lightweight models, such as those with a 1B parameter size?\", \"Is there a specific reason why only the DeBERTa encoder model was tested?\", \"Was there a particular reason for employing few-shot learning in an encoder model instead of using a masked language model (MLM)?\", \"Does this paper really aligns to the ICLR conference is questionable. Any other natural language processing conference seems more suitable.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer dTam(Part 3)\", \"comment\": \"> W4.b. A paper definitely doesn't need to be the best \\\"in practice\\\" option to be useful, as it might provide surprising/intuitive insights compared to previous work. However, I don\\u2019t find the results of this paper particularly surprising in light of past work. Knowledge distillation is already widely used with LMs, and distillation from larger LLMs to smaller LLMs is done all the time with good results. Synthetic data generation with LLMs is also frequently done, and has been shown to work well. That more synthetic data works better is as expected.\\n\\nOur findings offer several valuable insights. The contrast between the JSON and Decompose methods reveals a key trade-off in LLM-based data generation: structured formats like JSON can improve data quality but introduce parsing challenges and reduce efficiency, while unstructured generation is more efficient but prone to noise. The surprisingly comparable performance of Gemma-2b-it and LLaMa-3.1-8B-Instruct for generating ARC datasets suggests smaller LLMs can be effective for data generation, potentially reducing computational costs. Our direct comparison of constrained (JSON) and unconstrained (Decompose) generation contributes to a better understanding of the impact of formatting on LLM-generated data quality, an area not extensively explored in prior work, especially for few-shot MCQA.\\n\\nFurthermore, while data generation and distillation are individually well-established, their combination for few-shot MCQA with encoder-only models is less explored. Most prior work focuses on distilling LLMs into other LLMs (decoder-only models). Our work addresses the unique challenges of distilling into encoder-only architectures, showing that the combined approach yields substantial performance gains. Our experiments demonstrate a mutual benefit, where the combined approach achieves improvement, exceeding the individual improvements from data generation and distillation alone. This framework can be generalized to other NLP tasks like classification, information retrieval, and even to vision tasks with Vision LLMs(VLLMs), demonstrating its broader potential\\n\\n\\n> Q1. On line 245, why is it \\u201capproximately 4000 MCQA examples\\u201d? Shouldn\\u2019t this be exact?\\n\\nWe apologize for the confusion, in this sentence, we only want to clarify the setting used for training the DeBERTa model which uses a batch size of 8 with 500 iterations, which means that the method is trained with 4000 examples(with some duplication). We will remove the sentence in the revision to avoid confusion.\\n\\n> Q2. Why was number of negative examples set to 5 when MMLU and ARC only have 3?\\n\\nWe used 5 negative examples for all experiments unless otherwise stated in the paper. We chose 5 to ensure a consistent experimental setup across different datasets.\\n\\n> Q3. What temperature was used for the MMLU experiments?\\n\\nAll experiments in the main paper use the temperature of 2, except when explicitly mentioned. We will add this information to the revision.\\n\\n\\n> Q4. In Section 3.2, how is a sequence is being transformed into a scalar \\u2014 [CLS] token? Pooling?\\n\\nWe pool the sentence using the representation of the last layer of the [CLS] token because DeBERTa has a [CLS] token and is also used on the DeBERTa paper for a lot of tasks.\\n\\n> Q5. From Section 3.2 it is my understanding that handling a MCQ with n options requires n forward passes. 
Is that correct?\\n\\nYes, for each option we concat it with the question and perform forward pass independently, which results in n forward passes. \\n\\n> Q6. How was inference done for the baseline models?\\n\\nFor Tasksource, we used the zero-shot classification pipeline from Hugging Face (sileod/deberta-v3-base-tasksource-nli). It's important to note that this pipeline also performs multiple forward passes, one for each choice, similar to our proposed method. The pipeline treats each question-choice pair as a premise-hypothesis pair in a natural language inference (NLI) task and uses the logit for entailment to determine the likelihood of each choice being the correct answer.\\n\\n> Q7. When JSON samples are not properly formatted, are they resampled, or are less than 1024 samples used?\\n\\nWhenever possible, we try to resample until we can obtain 1024 examples. However, on some datasets, we find this very hard to do, as it takes a very long time to generate the examples, as it has a low parse rate. In this case, we include some datasets which are exceptions in Table 8 in the appendix. This highlights the trade-off between data quality (ensuring 1024 samples) and generation time. In some cases, the time required to generate additional samples to reach 1024 becomes prohibitive.\"}",
"{\"summary\": \"The paper proposes the method of distillation of the large language model to the smaller one for efficient solving of Multiple Choice Question Answering task, via data generation and distillation loss. Two methods of data generation are considered: generate the whole question-answer structure with answer options in json format (via 5-shot prompting); or generate question-answer pairs obtaining each option separately. Then, the smaller model is trained on the generated data by distillation loss, learning to predict the larger model's probability of the generated options. The evaluation is done on MMLU benchmark. For the ablation study, ARC dataset is used\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is clearly written, and presents a practical method of LLM distillation to a smaller encoder-only model\", \"A nice ablation study is provided. There are several interesting observations, e.g. increasing the performance with the distill loss + temperature adjustment, or the usage of the format correctness as an implicit sign of the model's confidence.\"], \"weaknesses\": [\"The method is rather straightforward and does not contain a significant novelty, although the presented analysis is good\", \"The practical usefullness of the considered task is not so clear. Indeed, Multiple Choice Question Answering is the specific QA format convenient for LLM's evaluation, but the MCQA results are not necessarily directly connected to the general QA abilities of the model. For encoder-only LLMs, classification-based approach looks more appropriate (i.e. scoring the correctness of the QA pair)\", \"The model does not outperform Tasksource model which is obtained by the multi-task training of the same backbone: the improvement on MMLU is marginal (+0.5), and on ARC data the proposed approach works significantly worse.\"], \"questions\": \"Q1. Do you have any idea, if the huge improvement of the performance of DeBerta after distillation is related to improving of the model's question answering ability, or just due to learning of Multiple Choice QA format?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal Feedback Request for Reviewer dTam\", \"comment\": \"Dear Reviewers dTam\\n\\nA gentle reminder regarding the rebuttal for paper 3797. The rebuttal period is ending soon, and we haven't yet received feedback from you. Your insights are greatly appreciated. We would be grateful if you could take a look at your earliest convenience. Thank you.\"}",
"{\"title\": \"Response to Reviewer snni(Part 4)\", \"comment\": \"> Q4. Does this paper really aligns to the ICLR conference is questionable. Any other natural language processing conference seems more suitable.\\n\\nWhile our current work focuses on MCQA, the core contribution lies in our framework for leveraging LLMs for both data generation and representation distillation. This aligns directly with ICLR's focus on representation learning, as our method effectively transfers knowledge, and thus learned representations, from a large LLM to a smaller, more efficient encoder-only model.\\n\\nThis framework has broader applicability beyond MCQA. Within NLP, it could be applied to tasks like text classification, sequence tagging, or other tasks, where efficient few-shot learning is highly desirable. Furthermore, with the recent advancements in Vision-Language Models (VLLMs), our approach could be extended to vision tasks as well. For example, in Visual Question Answering, the VLLM could generate captions which are used to create images with an image generative model, and also use VLLM to produce the question and possible answer. Then, our method could distill this knowledge into a smaller, more efficient model for more efficient visual question answering. \\n\\nBy addressing the challenges of few-shot learning and knowledge transfer through representation distillation, our work contributes to the broader research areas of efficient learning and representation learning, which are central themes of ICLR\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their valuable feedback and suggestions.\"}",
"{\"title\": \"Response to authors' rebuttal\", \"comment\": \"Dear authors,\\nI highly appreciate your efforts and the additional experiments you performed.\\n\\nStill, I keep some of my concerns: (1) the novelty is limited; (2) the applicability is limited, the method underperforms standard training; (3) the improvement upon multi-task Tasksource model is not convincing.\\nI wish your paper to be published at some less competitive venue.\\n\\nThis time, I've decided to keep my scores.\"}",
"{\"summary\": \"The authors use a few task-specific multiple choice questions as seed examples to get a LLM to generate task-specific, synthetic multiple choice data. They explore two ways of prompting the LLM to generate this data. They train a small, encoder-only model via knowledge distillation using soft labels assigned by the LLM. They show that training on synthetic data via distillation is better than just training on a few non-synthetic task-specific data points directly, and also compare to some other models. The authors also conduct ablation studies regarding the amount of synthetic data, synthetic data generation temperature, and choice of LLM.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(Originality) While synthetic data generation with LLMs and knowledge distillation into transformer based models are both widely used and studied, the authors consider the specific setting of MCQA and distilling a decoder-only model into an encoder-only model, which is a new setting.\\n\\n(Quality) The authors report results across several random seeds. They also do some nice ablation studies. The limitations section was also of high quality.\\n\\n(Clarity) The paper was generally clear and easy to follow.\\n\\n(Significance) As mentioned in originality, this paper explores a setting that is slightly different from past work.\", \"weaknesses\": [\"A couple typos (these didn\\u2019t affect my review at all, but just mentioning them)\", \"Line 69/70 missing a space\", \"Line 179 \\u201ca scalar values\\u201d -> \\u201cscalar values\\u201d\", \"When generating synthetic data, how can you be sure you\\u2019re not generating questions that are in the MMLU/ARC test sets (or that are quite close?). It would be nice to see something like nearest neighbors of generated questions, or something like overlap of answer options with answer option sets from the test sets.\", \"A note on the tasksource+decompose/JSON is I don\\u2019t think it can necessarily be concluded that tasksource+JSON is better than tasksource as 0.5 is quite a narrow margin.\", \"In my mind the main weakness of this paper would be lack of significance.\", \"In practice, in the resource constrained setting there are already compelling alternatives to the approach described in this paper. For example, tasksource has the same amount of parameters, comparable performance, and faster inference as it only needs one forward pass. It also doesn't require synthetic data generation for each task. Furthermore, performance is less good than that of e.g., Gemma-2-2b-it and similar models which can be run quite cheaply on even a laptop (especially after quantization). I don\\u2019t see when \\u201cdistillation into DeBERTa\\u201d would be used in practice because there are already very compelling alternatives. I'd be happy to hear the authors' take on this, though.\", \"A paper definitely doesn't need to be the best \\\"in practice\\\" option to be useful, as it might provide surprising/intuitive insights compared to previous work. However, I don\\u2019t find the results of this paper particularly surprising in light of past work. Knowledge distillation is already widely used with LMs, and distillation from larger LLMs to smaller LLMs is done all the time with good results. Synthetic data generation with LLMs is also frequently done, and has been shown to work well. 
That more synthetic data works better is as expected.\", \"*Strengths & Weaknesses tl;dr*: I think the authors\\u2019 study is well thought out and put together, and mostly easy to follow. However, I don\\u2019t think it provides substantial insight/methods beyond what already seems to be common knowledge in the research community. I\\u2019ve assigned a rating of 3, but I\\u2019d choose 4 if it were an option because I think the paper is overall well made but just doesn\\u2019t have the level of impact I typically associate with ICLR papers.\"], \"questions\": [\"On line 245, why is it \\u201capproximately 4000 MCQA examples\\u201d? Shouldn\\u2019t this be exact?\", \"Why was number of negative examples set to 5 when MMLU and ARC only have 3?\", \"What temperature was used for the MMLU experiments?\", \"In Section 3.2, how is a sequence is being transformed into a scalar \\u2014 [CLS] token? Pooling?\", \"From Section 3.2 it is my understanding that handling a MCQ with n options requires n forward passes. Is that correct?\", \"How was inference done for the baseline models?\", \"When JSON samples are not properly formatted, are they resampled, or are less than 1024 samples used?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer U1vS(Part 1)\", \"comment\": \"We deeply appreciate the reviewer's thorough assessment of our work and their constructive suggestions for improvement\\n\\n> W1. The method is rather straightforward and does not contain a significant novelty, although the presented analysis is good\\n\\nWe appreciate the reviewer's observation regarding the individual components of our method. While techniques like data augmentation, knowledge distillation, and MCQA with encoder-only models have been explored individually, our work introduces a novel combination of these specifically for few-shot MCQA. To our knowledge, this is the first study to systematically investigate the effects of combining LLM-driven data generation and probability score-based distillation for enhancing encoder-only models in this challenging setting. Our experiments demonstrate that this combined approach unlocks significant performance gains compared to using either technique in isolation, highlighting the novelty and practical value of our contribution. \\n\\n> W2. The practical usefullness of the considered task is not so clear. Indeed, Multiple Choice Question Answering is the specific QA format convenient for LLM's evaluation, but the MCQA results are not necessarily directly connected to the general QA abilities of the model. For encoder-only LLMs, classification-based approach looks more appropriate (i.e. scoring the correctness of the QA pair)\\n\\nWe acknowledge that MCQA performance doesn't perfectly correlate with general QA abilities. However, we believe our framework of LLM-driven data generation and distillation is adaptable to other QA tasks and can serve as a valuable component in broader QA systems. To demonstrate this, we adapted our method to a binary classification approach for judging the correctness of question-answer pairs, using the generated data and training the encoder model with a sigmoid activation and binary cross-entropy loss. We also explored a heuristic approach where we retain the MCQA training procedure but use a constant threshold derived from the average log probabilities of all answers in the generated data to determine answer correctness during evaluation.\\n\\n| Method | ARC-Easy F1 |ARC-Challenge F1|\\n|---------------------------|----------------|----------------|\\n| 1024 real data binary |56.81 ± 1.47 |40.25 ± 4.08 |\\n| 5 real data binary |27.01 ± 10.09|14.23 ± 9.64 |\\n| 1024 JSON binary |48.86 ± 1.42 |32.20 ± 6.93 |\\n| 1024 JSON MCQA heuristic |49.50 ± 1.35 |42.38 ± 0.54 |\\n\\nAs shown in the table, both approaches significantly improve upon the few-shot baseline. Specifically, our method with the heuristic achieves an F1 score of 49.50 \\u00b1 1.35 on ARC-Easy and 42.38 \\u00b1 0.54 on ARC-Challenge. Interestingly, this heuristic-based approach outperforms direct binary classification (48.86 \\u00b1 1.42 and 32.20 \\u00b1 6.93 F1 on ARC-Easy and ARC-Challenge, respectively). We hypothesize that this is because the heuristic, by leveraging the full probability distribution from the MCQA training, better captures the model's confidence in its predictions compared to a simple binary classification approach.\\n\\nFurthermore, our framework can be applied to more general QA tasks, such as scoring candidate answers retrieved from a knowledge base. In this scenario, the LLM could generate question-answer pairs, and our method could train an efficient encoder-only model to score the plausibility of retrieved answers. 
This approach offers advantages in terms of efficiency and scalability compared to using the LLM directly for scoring, particularly when dealing with a large number of candidate answers.\"}",
"{\"title\": \"Rebuttal Feedback Request for Reviewer U1vS\", \"comment\": \"Dear Reviewers U1vS\\n\\nA gentle reminder regarding the rebuttal for paper 3797. The rebuttal period is ending soon, and we haven't yet received feedback from you. Your insights are greatly appreciated. We would be grateful if you could take a look at your earliest convenience. Thank you.\"}",
"{\"title\": \"Rebuttal Feedback Request for Reviewer mtLv\", \"comment\": \"Dear Reviewers mtLv\\n\\nA gentle reminder regarding the rebuttal for paper 3797. The rebuttal period is ending soon, and we haven't yet received feedback from you. Your insights are greatly appreciated. We would be grateful if you could take a look at your earliest convenience. Thank you.\"}",
"{\"title\": \"Response to Reviewer dTam(Part 2)\", \"comment\": \"> W4.a. In practice, in the resource constrained setting there are already compelling alternatives to the approach described in this paper. For example, tasksource has the same amount of parameters, comparable performance, and faster inference as it only needs one forward pass. It also doesn't require synthetic data generation for each task. Furthermore, performance is less good than that of e.g., Gemma-2-2b-it and similar models which can be run quite cheaply on even a laptop (especially after quantization). I don\\u2019t see when \\u201cdistillation into DeBERTa\\u201d would be used in practice because there are already very compelling alternatives. I'd be happy to hear the authors' take on this, though.\\n\\nThe reviewer mentions Tasksource requiring only one forward pass. However, the zero-shot classification pipeline used for Tasksource also requires multiple forward passes (one for each choice), just like our method. This can be seen in the transformers library source code of the class ZeroShotClassificationPipeline(ChunkPipeline), which mention \\\"Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis\\\".\\n\\nWhile Tasksource performs well overall, it can struggle in specific domains. For instance, on the international_law dataset in MMLU, our method achieves 65.62 accuracy, an improvement over Tasksource's 57.02. This suggests that our LLM-driven data generation and distillation approach can be particularly effective in domains where even extensively trained multi-task models may lack sufficient knowledge. \\n\\n\\n## Memory Usages During Inference (GB)\\nSequence Length\\t| DeBERTa-base\\t|LLaMA 1B | LLaMA 1B 4 bit |\\tGemma 2B | Gemma 2B 4 bit |\\n|---------------|---------------|-----------|-----------------|----------|----------------|\\n|128\\t\\t| 1.701 \\t| 3.576 | 2.211 \\t |\\t6.351\\t | 3.444\\t |\\n|256\\t\\t| 1.728 \\t| 3.773 | 2.421 \\t | 6.705 | 3.912\\t |\\n|512\\t\\t| 1.768 \\t| 4.134 | 2.794 \\t | 7.393\\t | 4.585\\t |\\n|1024\\t\\t| 2.060 \\t| 4.872 | 3.507 \\t | 8.792 | 5.971\\t |\\n|2048\\t\\t| 3.152 \\t| 6.235 | 4.870 \\t | 11.610 | 8.699\\t |\\n|4096\\t\\t| 6.600 \\t| 9.157 | 7.741 \\t | 17.050 | 14.207\\t |\\n\\n\\nRegarding larger LLMs like Gemma-2b-it, even with quantization, they have significant memory requirements, especially in few-shot scenarios. As shown in Table above, even a 4-bit quantized Gemma-2B requires substantial memory. This is further exacerbated by the longer sequence lengths inherent in 5-shot prompting, making these models less practical for resource-constrained settings. 
We also include the measurements for recently released LLaMa-3.2-1B-Instruct, which its 4 bit memory usage is comparable to our approach.\\n\\n\\n## Performance Comparison on MMLU\\n|Method \\t\\t\\t\\t| STEM | Social Science | Humanities | Other | Average |\\n|---------------------------------------|------|----------------|------------|-------|---------|\\n|LLaMa-3.2-1B-Instruct (5-shot)\\t\\t| 36.5 | 47.8\\t\\t| 46.3 | 45.1 | 43.1 |\\n|LLaMa 3.2 1B-Instruct 4-bit (5-shot) | 35.7 | 45.6\\t\\t| 42.2\\t | 40.6 | 40.3 |\\n|LLaMa 3.2 1B-Instruct 4-bit (0-shot)\\t| 29.4 | 33.7\\t\\t| 26.7\\t | 29.1 | 29.6 |\\n|DeBERTa-v3 + JSON distill (5-shot) | 32.5 | 43.2\\t\\t| 44.3\\t | 40.6 | 39.3 |\\n*0-shot refers to evaluating the model without any task-specific examples.\\n\\nWe also compared our method to the smaller LLaMa-3.2-1B-Instruct model on Table above. While the 5-shot 4-bit LLaMa model slightly outperforms our method on average, the performance difference is much smaller than with Gemma-2B. Importantly, our method significantly outperforms the 0-shot LLaMa model (which has a comparable sequence length to our method) \\n\\nIn summary, our method offers several advantages: (1) lower memory and compute requirements compared to larger LLMs, even with quantization, making it more practical for resource-constrained environments; (2) the ability to outperform strong baselines like Tasksource in specific domains by leveraging the knowledge of a large LLM for data generation and distillation; and (3) strong performance in few-shot settings, effectively addressing the challenges of limited labeled data.\"}",
"{\"title\": \"Response to Reviewer mtLv\", \"comment\": \"We deeply appreciate the reviewer's thorough assessment of our work and their constructive suggestions for improvement\\n\\n> W1. The method relies heavily on the availability of robust LLMs, which may not be readily accessible in languages other than English or for certain domain-specific tasks.\\n\\nWe acknowledge that the reliance on robust LLMs is a current limitation, particularly for languages other than English and specialized domains where high-performing LLMs may not be readily available. However, our framework itself is language- and domain-agnostic, meaning that it can be applied to any language or domain provided a suitable LLM is available.\\n\\nSeveral approach exist for mitigating this limitation, even with the current state of LLMs. While models like Llama 3.1 offer improved multilingual support, other multilingual LLMs such as SeaLLMs[1], or language-specific LLMs such as Cendol[2] could also be explored for their applicability to our method. This is an area we intend to investigate in future work.\\n\\nFor domain-specific tasks, fine-tuning existing LLMs on domain-specific data could improve their performance for data generation and distillation. For instance, we could fine-tune an LLM on a corpus of medical texts to generate higher-quality medical MCQA data. Exploring the effectiveness of domain adaptation for our framework is another important direction for future research.\\n\\n\\n[1] SeaLLMs - Large Language Models for Southeast Asia \\n\\n[2] Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages\\n\\n> W2. The decomposed generation method, while reducing parsing errors, often results in noisy data due to longer and less structured answers.\\n\\nWe acknowledge that the decomposed generation method can produce noisy data due to longer and less structured answers. We are exploring several strategies to mitigate this limitation in future work. One promising direction is to investigate more sophisticated prompting techniques. For example, we could incorporate more constraints directly into the prompts, specifying the desired length or format of the answers. Using iterative refinement is also promising, where we provide feedback to the LLM and ask it to revise its responses, thus improving generated data quality. Additionally, using more diverse and representative examples in the prompts might guide the LLM toward generating more appropriate answers. Exploring the effectiveness of such an approach for our framework is another important direction for future work.\"}",
"{\"summary\": \"This paper presents a novel approach to address few-shot multiple choice question answering (MCQA) by leveraging large language models (LLMs) for data generation and knowledge distillation into a smaller, efficient encoder-only model, DeBERTa-v3-base. The study addresses the computational challenges associated with using LLMs directly in real-world applications and provides a three-step framework involving synthetic data generation, LLM-based scoring, and distillation training. Experimental results demonstrate significant improvements in accuracy over baseline models on the Massive Multitask Language Understanding (MMLU) benchmark, as well as competitive performance compared to larger models like LLaMA-7B and Flan-T5-250M. The paper also includes ablation studies on various generation methods, scoring techniques, and hyperparameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The approach addresses a relevant problem in natural language processing, providing a practical solution for scenarios where computational resources are limited.\\n2. The framework is straightforward, and the two methods of data generation (JSON and decomposed) are described in detail, with thoughtful consideration of their benefits and limitations.\\n3. The paper presents extensive experiments, including performance comparisons, ablation studies, and evaluations on the MMLU benchmark.\", \"weaknesses\": \"1. The method relies heavily on the availability of robust LLMs, which may not be readily accessible in languages other than English or for certain domain-specific tasks.\\n2. The decomposed generation method, while reducing parsing errors, often results in noisy data due to longer and less structured answers.\", \"questions\": \"Please refer to the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer dTam(Part 1)\", \"comment\": \"We deeply appreciate the reviewer's thorough assessment of our work and their constructive suggestions for improvement\\n\\n\\n> W1. A couple typos (these didn\\u2019t affect my review at all, but just mentioning them)\\n\\nThank you for pointing out the typos. We will correct them in the revision. \\n\\n> W2. When generating synthetic data, how can you be sure you\\u2019re not generating questions that are in the MMLU/ARC test sets (or that are quite close?). It would be nice to see something like nearest neighbors of generated questions, or something like overlap of answer options with answer option sets from the test sets.\\n\\nTo address potential test set contamination, we analyzed the semantic similarity between the generated, training, and test set questions using the Sentence Transformers all-MiniLM-L6-v2 model. For each generated question, we calculated the maximum cosine similarity to all questions in the training and test sets. We then averaged these maximum similarities across all generated questions to obtain an overall measure of similarity.\\n\\n## ARC-Easy Averaged \\n| \\t \\t\\t | JSON Generated Data | Training Dataset | Test Dataset |\\n|---------------------------|---------------------|------------------|--------------|\\n| JSON Generated Data \\t | 1.000 \\t | 0.646 \\t | 0.590 \\t |\\n| Training Data \\t | 0.549 \\t | 1.000 \\t | 0.581\\t |\\n| Test Data \\t\\t | 0.548 \\t | 0.653 \\t | 1.000\\t |\\t\\n\\t\\n## ARC-Challenge Averaged\\t\\n| \\t \\t\\t | JSON Generated Data | Training Dataset | Test Dataset |\\n|---------------------------|---------------------|------------------|--------------|\\n| JSON Generated Data \\t | 1.000 \\t | 0.617 \\t | 0.539 \\t |\\n| Training Data \\t | 0.541 \\t | 1.000 \\t | 0.534\\t |\\n| Test Data \\t\\t | 0.533 \\t | 0.610 \\t | 1.000\\t |\\t\\n\\nThe similarity between the generated questions and the test set is comparable to the similarity between the training set and the test set. If the generated questions were simply copies from the training set, the similarity to the training set would be much higher (close to 1), and the similarity to the test set would likely also be higher. The observed comparable similarity scores suggest the generated questions are novel and not mere duplicates. To further identify any potential near duplicates, we also examined the maximum similarity scores between the generated questions and the training or test sets.\\n\\n## ARC-Easy Maximum Similarity \\n| \\t \\t\\t | JSON Generated Data | Training Dataset | Test Dataset |\\n|---------------------------|---------------------|------------------|--------------|\\n| JSON Generated Data \\t | 1.000 \\t | 0.923 \\t | 0.935 \\t |\\n| Training Data \\t | 0.923 \\t | 1.000 \\t | 1.000\\t |\\n| Test Data \\t\\t | 0.935 \\t | 1.000 \\t | 1.000\\t |\\t\\n\\t\\n## ARC-Challenge Maximum Similarity\\t\\n| \\t \\t\\t | JSON Generated Data | Training Dataset | Test Dataset |\\n|---------------------------|---------------------|------------------|--------------|\\n| JSON Generated Data \\t | 1.000 \\t | 0.945 \\t | 0.888 \\t |\\n| Training Data \\t | 0.945 \\t | 1.000 \\t | 0.997\\t |\\n| Test Data \\t\\t | 0.888 \\t | 0.997 \\t | 1.000\\t |\\t\\n\\n\\nAs shown in the 'Maximum similarity' tables, the maximum similarity between the generated data and the test sets is noticeably lower than the maximum similarity between the training and test sets. 
This further supports our claim that the generated data does not simply replicate the test set questions. The training dataset exhibits near-duplicate questions (similarity near 1), whereas our generated data does not exhibit such high similarity to the test set (around 0.93 and 0.88).\\n\\n> W3. A note on the tasksource+decompose/JSON is I don\\u2019t think it can necessarily be concluded that tasksource+JSON is better than tasksource as 0.5 is quite a narrow margin.\\n\\nWe acknowledge that the average improvement of 0.5 on MMLU is modest. However, it's important to consider that this average is across 57 diverse datasets. Our method leads to improved performance on 33 out of the 57 MMLU datasets, demonstrating its potential to enhance performance in specific areas. In particular, the improvements on the STEM subset (+1.6) and Social Science subset (+0.9) are more substantial. \\n\\nWhile our method doesn't improve performance on all datasets. We believe our approach is particularly valuable in few-shot scenarios, where even small improvements can be significant, especially in domains or tasks where existing multi-task models like Tasksource may lack sufficient training data. Our method allows us to leverage the knowledge of a large LLM to augment the training data and improve performance in these data-scarce situations.\"}",
"{\"title\": \"Response to Reviewer snni(Part 8)\", \"comment\": \"> Lastly, while memory efficiency is a strength, task-specific fine-tuning contrasts with the flexibility of LLMs. Although the authors demonstrate an example with binary classification to show adaptability to other tasks, the results still show performance limitations. Furthermore, there is no actual evidence of applicability to other task types, raising questions about its utility in real-world applications.\\n\\nThe proposed approach offers a practical solution for deploying MCQA models in resource-constrained environments where LLMs are impractical. We acknowledge that task-specific fine-tuning reduces flexibility compared to the zero-shot capabilities of large language models. However, this trade-off is often necessary and acceptable in real-world deployments where resources are limited. Our approach reduces the memory footprint compared to using an LLM directly. This allows for deployment on devices with limited resources or enables much faster processing by using larger batches during processing.\\n\\nOur experiments with binary classification demonstrate that the proposed framework can be adapted to other task formats. The significant improvement over the 5-shot baseline, particularly with the heuristic method, highlights this potential for broader applicability. \\n\\nBeyond binary classification, we plan to investigate the applicability of our framework to tasks like sequence-to-sequence, sequence tagging, etc. We anticipate that applying our framework to these tasks will require exploring more advanced distillation techniques, such as sequence-level distillation or distilling the attention directly, to effectively transfer the knowledge captured by LLMs. However, we believe the core principles of generating synthetic data and distilling LLM knowledge hold significant promise for improving performance and efficiency across these diverse NLP tasks.\"}",
"{\"comment\": \"Thank you for your detailed response and the additional analyses. These clarify key aspects such as memory efficiency and performance. However, I have several concerns that remain unaddressed.\\n\\nFirstly, while comparisons with lightweight LLMs in MCQA are valuable, the binary classification experiments in Section C.7 omit similar comparisons. Evaluating against LLMs like LLaMA, which are capable of adapting to diverse tasks without task-specific fine-tuning, would provide stronger evidence of the method's practical relevance.\\n\\nSecondly, Table 11\\u2019s title references \\u201c4-bit LLM,\\u201d yet no explicit LLM results are presented, creating confusion. Additionally, repetitive captions across Tables 10\\u201312 make it challenging to differentiate the experiments, reducing presentation clarity. Furthermore, it is difficult to identify exactly which parts of the revised paper have been updated, making it hard to assess how the authors have addressed prior feedback.\\n\\nThirdly, while the authors claim the core contribution lies in the data augmentation and distillation framework, its application has been demonstrated exclusively with DeBERTa. This raises doubts about generalizability. The focus on a single encoder-only model raises concerns that the approach, while positioned as a framework for LLM knowledge distillation, is overly narrow and lacks deeper analysis of encoder architectures. This limited scope suggests a relatively naive exploration of distillation, which does not fully address broader challenges or provide insights applicable to other encoder models.\\n\\nLastly, while memory efficiency is a strength, task-specific fine-tuning contrasts with the flexibility of LLMs. Although the authors demonstrate an example with binary classification to show adaptability to other tasks, the results still show performance limitations. Furthermore, there is no actual evidence of applicability to other task types, raising questions about its utility in real-world applications.\\n\\nIn summary, the work shows promise but is limited by its focus on DeBERTa, lack of broader comparisons, and presentation inconsistencies. I have adjusted the contribution score accordingly and would like to keep the remaining scores.\\n\\nThank you for your efforts.\"}",
"{\"title\": \"Response to Reviewer snni(Part 6)\", \"comment\": \"> Furthermore, I am concerned about the general applicability of your method to encoder-only architectures. You mentioned that RoBERTa performed poorly in your initial experiments, which raises the question of whether the benefits of your approach are specific to DeBERTa. Exploring and reporting results with a variety of encoder-only models would strengthen your claims about the method's effectiveness across different architectures. Without such exploration, it is difficult to conclude that the approach broadly benefits encoder-only models rather than being tailored to a specific model.\\n\\nThe reviewer raises a valid point about the general applicability to encoder-only architectures. In our initial experiments, we observed that RoBERTa performed poorly on the target task even when trained with a substantial amount of real data. This motivated our choice to focus on DeBERTa, which showed significantly stronger performance in this data-rich setting. Therefore, we believe that comparing our method's ability to improve few-shot learning using DeBERTa provides a more meaningful benchmark.\\n\\nWhile we agree that evaluating our approach with other encoder architectures would be beneficial, our primary focus in this work is on addressing the challenge of few-shot learning in resource-constrained scenarios. Our results demonstrate that our method of generating data and distilling LLMs is a promising approach to significantly improve performance when labeled data is scarce. The core contribution lies in this data augmentation and distillation framework, which is designed to be generally applicable. Exploring its effectiveness with other encoder models is a valuable direction for future work.\\n\\n\\n> Additionally, in your related work section, it appears that only (Sileo, 2024) in line 80 is mentioned regarding prior research on encoder-only models\\u2019 performance, without considering prior work on encoder-only model with MCQA. There may be other relevant studies such as (Ghosal, 2022) or (Siino, 2024) that could provide context for your work and help clarify its novelty.\\n\\nThank you for bringing these relevant works to our attention. We will incorporate (Ghosal, 2022) and (Siino, 2024) into our related work section in the next revision. While these papers provide valuable context for encoder-only models in MCQA, our work focuses on a different aspect: leveraging LLMs for data augmentation and knowledge distillation to improve few-shot performance. Our experiments on binary classification, a task similar to that explored in (Ghosal, 2022), demonstrate that our LLM-driven approach can significantly boost performance even beyond the MCQA format. This highlights the broader applicability of our data generation and distillation framework, which can complement and enhance existing techniques for training encoder-only models, such as those presented in the papers you mentioned.\"}",
"{\"title\": \"Response to Reviewer snni(Part 1)\", \"comment\": \"We deeply appreciate the reviewer's thorough assessment of our work and their constructive suggestions for improvement\\n\\n> W1. [Novelty is limited] : The paper\\u2019s novelty appears limited, as it does not introduce a new dataset and relies primarily on formatting prompts either in full JSON format or as segmented parts, raising questions on whether these methods constitute a genuinely novel approach. Furthermore, the distillation technique applied here does not seem particularly innovative, as it essentially reduces to a form of fine-tuning.\\n\\nWe acknowledge that LLM-based data generation and knowledge distillation are not novel in isolation. However, our work focuses on the effect of combining these techniques specifically for few-shot MCQA with encoder-only models, a setting that has received less attention. Most existing LLM distillation research targets smaller decoder-only models. Our approach, in contrast, distills into encoder-only architectures, presenting unique challenges in transferring generative capabilities to a discriminative model.\\n\\nFor this work, we opted for a simple distillation technique to establish a clear baseline and to facilitate a more straightforward analysis of the combined effects of data generation and distillation. We believe this provides a solid foundation for future research exploring more sophisticated distillation methods to further enhance performance. We agree that exploring such methods is a promising avenue for future work.\\n\\nRegarding dataset creation, while our primary focus wasn't on introducing a novel benchmark dataset, our findings on the JSON generation method coupled with LLM distillation offer a promising pathway towards generating high-quality MCQA data. The improved downstream performance observed when training on JSON-generated data with distillation strongly suggests that this method acts as an effective filter for higher-quality examples. This observation itself is a valuable contribution, paving the way for future research to build upon our approach and combine it with other techniques like filtering, post-processing, and retrieval-augmented generation to create novel benchmark datasets. \\n\\n> W2. [Using Encoder-only Models - limited Experimental setups] : Additionally, while the paper suggests the encoder-only model\\u2019s powerful capabilities, this claim is primarily based on improvements from distillation and model size reduction. These factors alone may not suffice to substantiate the model\\u2019s claimed \\\"power\\\" without more substantial baseline comparisons, particularly in tasks beyond fine-tuning.\\n\\n\\nWe appreciate the reviewer's point regarding the experimental setup. Our work aims to achieve strong few-shot MCQA performance and also improved efficiency compared to using large LLMs directly. While we highlight the potential of encoder-only models for efficient inference, our primary contribution lies in demonstrating how LLM-generated data and distillation can be effectively combined to achieve competitive accuracy in the few-shot setting. \\n\\nWe acknowledge that a more comprehensive analysis of the encoder-only model's capabilities across diverse tasks would strengthen the paper. While beyond the scope of this current work, which focuses specifically on few-shot MCQA, we plan to explore such evaluations in future research. 
Within the current scope, our experiments primarily demonstrate the effectiveness of our proposed method for improving few-shot performance efficiently by leveraging LLMs during training.\"}",
"{\"comment\": \"Thank you for your detailed response and for addressing my concerns.\\n\\nI appreciate your efforts in clarifying various aspects of your work. However, I still have some reservations that I would like to share.\\n\\nFirstly, I appreciate that you have provided performance comparisons under the same 5-shot settings for both your model and the LLaMa models. However, I remain concerned that the performance differences observed might still be influenced by factors such as sequence length and the number of shots included. For a more meaningful evaluation, it would be beneficial to provide analyses that control for these variables, perhaps by matching sequence lengths or providing statistical significance testing of the performance differences. This would help to more clearly demonstrate the effectiveness of your method independent of the benefits conferred by additional shots or shorter sequence lengths. Without such controlled comparisons, it is challenging to fully assess the advantages of your approach.\\n\\nSecondly, while you have demonstrated improvements in the MCQA task, it appears that your method is specifically tailored to this particular format and may not generalize well to other NLP tasks. The claims about broader applicability to tasks like text classification or sequence tagging seem speculative without supporting evidence. Providing empirical results on additional tasks would help substantiate these claims and demonstrate the generalizability of your approach. Without such evidence, it is challenging to assess the overall impact of your method beyond MCQA, and there is a concern that the applicability to other tasks might be overinterpreted.\\n\\nFurthermore, I am concerned about the general applicability of your method to encoder-only architectures. You mentioned that RoBERTa performed poorly in your initial experiments, which raises the question of whether the benefits of your approach are specific to DeBERTa. Exploring and reporting results with a variety of encoder-only models would strengthen your claims about the method's effectiveness across different architectures. Without such exploration, it is difficult to conclude that the approach broadly benefits encoder-only models rather than being tailored to a specific model.\\n\\nAdditionally, in your related work section, it appears that only (Sileo, 2024) in line 80 is mentioned regarding prior research on encoder-only models\\u2019 performance, without considering prior work on encoder-only model with MCQA. There may be other relevant studies such as (Ghosal, 2022) or (Siino, 2024) that could provide context for your work and help clarify its novelty.\\n\\nOverall, while your approach shows promise within the scope of MCQA using DeBERTa, the limitations in generalizability and concerns regarding experimental comparisons suggest that the contribution may be somewhat narrow.\\n\\nThank you again for your response.\"}",
"{\"title\": \"Response to Reviewer cyRj\", \"comment\": \"We deeply appreciate the reviewer's thorough assessment of our work and their constructive suggestions for improvement\\n\\n> W1. The method's performance improvement is limited and depends on the strength of the base model. While the gains are more pronounced with the weaker DeBERTa-base model, they are minimal with the stronger Tasksource model, and even slightly decreases in the case of Decompose.\\n\\nWe acknowledge this as the limitation of our current works. However, Our primary focus is improving few-shot MCQA performance. Tasksource, having been trained on a massive multi-task dataset (including MMLU), represents a strong baseline that may be difficult to improve upon significantly. Our method aims to maximize performance in the low-resource regime, where real data is limited. We demonstrate that even with limited initial examples, our method can extract valuable knowledge from the LLM and transfer it effectively to the smaller encoder-only model. \\n\\nWhile the average improvement on MMLU is modest, we observe more substantial gains on specific subsets, particularly in STEM (+1.6 with JSON distillation and +1.0 with decompose distillation). Moreover, fine-tuning Tasksource with our generated data and distillation leads to notable improvements on the ARC datasets, as shown in table below.\\n\\n| Method \\t\\t | ARC-Easy | ARC-Challenge \\t|\\n|---------------------------|----------|----------------|\\n| Tasksource \\t\\t | 72.8 | 51.2 \\t\\t|\\n| Tasksource + JSON distill | 74.5 | 54.7 \\t\\t|\\n\\nThese results highlight the potential of our method, especially when applied to domains or tasks where even a strong multi-task model like Tasksource may benefit from additional, targeted data.\\n\\n> W2. Additionally, when using DeBERTa-base, the best performance (JSON distill) achieved by using only the constructed dataset does not surpass that of a multi-task fine-tuned model (Tasksource).\\n\\nWe also acknowledge this as the limitation of our current works. However, a direct comparison is not entirely fair. Tasksource is trained on hundreds of datasets covering a broad spectrum of NLP tasks, while our method leverages only a small number of initial examples and an LLM. This significant difference in training data translates to a substantial difference in computational cost: Tasksource requires several days of training on powerful hardware, whereas our method completes training in approximately 5 minutes on a single GPU.\\n\\nOur approach is explicitly designed for few-shot learning, aiming to maximize performance when labeled data is scarce. While Tasksource is a strong general-purpose model, its broad training doesn't guarantee superior performance on all tasks. For example, on the international_law dataset in MMLU, our method significantly outperforms Tasksource, achieving 65.62% accuracy compared to Tasksource's 57.02%. This difference may be due to lack of such training example in the Tasksource's training datasets. This highlights the value of our approach for tasks or domains where even a broadly trained model like Tasksource can benefit from targeted data augmentation and distillation, especially in few-shot scenarios.\\n\\n> Q1. The experiments lack a more detailed analysis of the two data generation methods (e.g., example-based analysis): Why do the two methods (JSON and Decompose) lead to different outcomes in performance? 
Why does JSON outperform Decompose?\", \"the_two_methods_present_a_trade_off\": \"JSON offers higher quality but lower efficiency due to parsing, while Decompose offers higher efficiency but potentially noisier data. The JSON format acts as a filter, discarding instances with invalid JSON and structural errors. This results in a smaller but cleaner dataset. In contrast, the Decompose method, while avoiding parsing, is more prone to generating noisy examples, such as overly long or list-like answers that are less typical of real-world MCQA data. On average, the Decompose method produces sequences that's significantly longer than the real dataset, while JSON-generated sequences length are more similar to that of real datasets, which we shows in detail on Table 11 and 12 in the appendix C. Moreover, on Table 20 in the appendix provides a concrete example of this noise, showing the Decompose method generating an excessively long, list-like answer. Despite this potential for noise, the Decompose method often performs well, especially when combined with distillation, which likely mitigates the impact of these noisy examples.\"}",
"{\"comment\": \"Thank you for your thoughtful and dedicated participation in the discussion. To conclude the feedback comprehensively,\", \"application_of_frameworks\": \"As highlighted in the last response, it would be highly beneficial to apply the proposed frameworks in practical scenarios and include the results. Theoretical discussion, while valuable, gains substantial credibility when complemented by empirical evidence, and including practical outcomes would further substantiate the findings.\", \"electra_and_mmlu_evaluation\": \"While additional experiments using ELECTRA were noted, results on MMLU's 5-shot MCQA task\\u2014which the original paper emphasizes\\u2014are essential. This would ensure alignment with the paper's core evaluation criteria and provide a more robust comparison.\", \"in_depth_understanding_of_encoder_architecture\": \"Beyond attributing performance issues to a model's inherent limitations, a deeper analysis of the encoder architecture is required. This entails exploring its structural intricacies and how they interact with task-specific demands, such as few-shot MCQA scenarios, while also examining how these architectural choices influence the effectiveness of knowledge distillation.\", \"knowledge_distillation_and_reasoning_analysis\": \"The feedback suggests moving beyond surface-level interpretations of knowledge distillation. A more nuanced exploration could clarify whether the process effectively transfers reasoning capabilities or remains overly simplistic. Here, deeper analysis could be effectively supported by advanced visualization techniques, and figures could benefit from further refinement.\", \"dataset_quality_and_novelty\": \"Dataset generation plays a pivotal role in such research. Reliance on conventional formats like JSON and segment-based prompting may lack novelty. Comprehensive validation of dataset quality, examining aspects such as question length, complexity, and diversity, is recommended to establish robustness.\", \"broader_contextualization_in_related_work\": \"The limited engagement with other encoder models and related research may leave the methodology\\u2019s claims less substantiated. Expanding the scope of the literature review and positioning the proposed approach within a broader research context would strengthen the argument.\\n\\nDespite these points, your effort and dedication to contributing to this discussion are deeply appreciated. Thank you again for your active participation.\"}",
"{\"title\": \"Response to Reviewer snni(Part 7)\", \"comment\": \"Thank you for your insightful feedback. We have made several revisions to the paper to address your concerns.\\n\\n\\n> Firstly, while comparisons with lightweight LLMs in MCQA are valuable, the binary classification experiments in Section C.7 omit similar comparisons. Evaluating against LLMs like LLaMA, which are capable of adapting to diverse tasks without task-specific fine-tuning, would provide stronger evidence of the method's practical relevance.\\n\\nWhile direct comparison with LLMs on binary classification would be informative, our current results already demonstrate that the proposed framework can adapt to different task formats and improve performance significantly over a naive few-shot baseline. This highlights the potential for broader applicability, even though direct LLM comparisons are left for future work due to time constraints.\\n\\n> Secondly, Table 11\\u2019s title references \\u201c4-bit LLM,\\u201d yet no explicit LLM results are presented, creating confusion. Additionally, repetitive captions across Tables 10\\u201312 make it challenging to differentiate the experiments, reducing presentation clarity. Furthermore, it is difficult to identify exactly which parts of the revised paper have been updated, making it hard to assess how the authors have addressed prior feedback.\\n\\nWe apologize for the mistakes in Tables 10-12 captions in the previous version. We will made the following changes:\", \"table_10\": \"\\\"Performance Comparison with Small and 4-bit LLMs\\\". This caption now clearly indicates the focus on smaller models and the use of quantization.\", \"table_11\": \"\\\"Memory Usage Comparison with Small and Quantized LLMs\\\". This title now accurately reflects the table's content.\", \"table_12\": \"\\\"Cross-Datasets Evaluation Comparison\\\". This revised title clearly indicates the results of this experiment, which is to assess the generalizability of our approach and whether performance gains are due to format learning or knowledge acquisition.\\n\\n> Thirdly, while the authors claim the core contribution lies in the data augmentation and distillation framework, its application has been demonstrated exclusively with DeBERTa. This raises doubts about generalizability. The focus on a single encoder-only model raises concerns that the approach, while positioned as a framework for LLM knowledge distillation, is overly narrow and lacks deeper analysis of encoder architectures. This limited scope suggests a relatively naive exploration of distillation, which does not fully address broader challenges or provide insights applicable to other encoder models.\\n\\nThe reviewer raises a valid concern about the generalizability of our framework beyond DeBERTa. While our initial focus was on DeBERTa due to its strong performance in MCQA and comparability with the Tasksource baseline, we agree that demonstrating applicability to other encoder architectures is crucial for establishing the framework's broader relevance. Therefore, we conducted additional experiments with ELECTRA, another prominent encoder-only model, on the ARC datasets.\\n\\nThe results, presented below, show that our framework consistently improves upon naive few-shot learning with ELECTRA, mirroring the gains observed with DeBERTa. 
This provides strong evidence that our framework's benefits are not specific to DeBERTa but extend to other encoder architectures.\\n\\n### ELECTRA Performance on ARC datasets\\n| Method | ARC-Easy F1 |ARC-Challenge F1|\\n|---------------------------|----------------|----------------|\\n| 5 real data |28.54 ± 1.35 |27.06 ± 1.22 |\\n| 1024 real data |59.81 ± 0.88 |39.95 ± 1.41 |\\n| 1024 JSON Generate |42.48 ± 1.26 |33.11 ± 0.81 |\\n| 1024 JSON Distill |54.22 ± 0.88 |35.89 ± 1.45 |\\n\\nWhile the absolute performance with ELECTRA is lower than with DeBERTa, this likely reflects differences in the models' inherent capabilities rather than a limitation of our framework. The crucial observation is the substantial improvement our framework provides over the 5-shot baseline in both cases. The difference in performance between DeBERTa and ELECTRA trained on 1024 real data also supports this. This indicates our method can significantly boost few-shot performance, regardless of the underlying encoder architecture.\"}",
"{\"title\": \"Response to Reviewer snni(Part 2)\", \"comment\": \"> W3. [Inadequate analysis of suggested method] : There is inadequate validation of the quality of the LLM-generated dataset, which raises further concerns about the reliability and applicability of the findings.\\n\\nTo assess the quality of the LLM-generated dataset, we analyzed the semantic similarity between the generated questions and the questions in the real training and test sets. We used the Sentence Transformers all-MiniLM-L6-v2 model to encode all questions into semantic vector representations. For each generated question, we calculated the maximum cosine similarity to all questions in the training and test sets. We then averaged these maximum similarity scores across all generated questions to obtain a measure of overall similarity.\\n\\n\\nThe results, presented below, show that the similarity between the generated questions and the test set is comparable to the similarity between the training set and the test set.\\n\\n### ARC-Easy\\t\\n| \\t \\t\\t | JSON Generated Data | Training Dataset | Test Dataset |\\n|---------------------------|---------------------|------------------|--------------|\\n| JSON Generated Data \\t | 1.000 \\t | 0.646 \\t | 0.590 \\t |\\n| Training Data \\t | 0.549 \\t | 1.000 \\t | 0.581\\t |\\n| Test Data \\t\\t | 0.548 \\t | 0.653 \\t | 1.000\\t |\\t\\n\\t\\n### ARC-Challenge\\t\\n| \\t \\t\\t | JSON Generated Data | Training Dataset | Test Dataset |\\n|---------------------------|---------------------|------------------|--------------|\\n| JSON Generated Data \\t | 1.000 \\t | 0.617 \\t | 0.539 \\t |\\n| Training Data \\t | 0.541 \\t | 1.000 \\t | 0.534\\t |\\n| Test Data \\t\\t | 0.533 \\t | 0.610 \\t | 1.000\\t |\\t\\n\\n\\nWe can see that the similarity of the question in the generated data and the testing set is similar to that of the similarity of training set to the testing set. This means that the generated data is semantically similar to that of real data, which we believe is one of indicator that the question generated by our method have good quality.\\n\\nIf the generated questions were merely duplicates from the training set, we would expect to see a much higher average maximum similarity between the generated data and the training set, and likely a higher similarity to the test set as well. The observed comparable similarity scores suggest that the generated questions are novel and not simply copied from the existing data. This indicates that the LLM is generating new, semantically similar questions, supporting the reliability of our findings.\\n\\nWe acknowledge that semantic similarity alone doesn't fully encompass question quality, as factors like relevance, difficulty, and answer choice quality also matter. However, this analysis provides evidence that the LLM-generated data is semantically similar to real-world MCQA data and not simply replicating the training or test sets.\"}",
"{\"title\": \"Rebuttal Feedback Request for Reviewer snni\", \"comment\": \"Dear Reviewers snni,\\n\\nFollowing your initial feedback (thank you!), we submitted a revised rebuttal for paper 3797. The rebuttal period is ending soon, and we haven't yet received feedback on this revised version from you. Your insights on this updated rebuttal are greatly appreciated. We would be grateful if you could take a look at your earliest convenience. Thank you.\"}",
"{\"title\": \"Response to Reviewer snni(Part 3)\", \"comment\": \"> Q1. Has there been any comparison with recently released lightweight models, such as those with a 1B parameter size?\\n\\n\\nTo compare our method with a lightweight LLM, we evaluated the recently released LLaMa-3.2-1B-Instruct model. We also analyzed the memory usage of both models during inference. We measured memory consumption using the vmlDeviceGetMemoryInfo function from pynvml. For this measurement, we fed each model a sequence of 128 - 4096 random tokens from the model's vocabulary and compare their memory usages.\\n\\n### Memory Usages During Inference (GB)\\nSequence Length\\t| DeBERTa-base\\t|LLaMA 1B | LLaMA 1B 4 bit |\\t\\n|---------------|---------------|-----------|-----------------|\\n|128\\t\\t| 1.701 \\t| 3.576 | 2.211 \\t |\\t\\n|256\\t\\t| 1.728 \\t| 3.773 | 2.421 \\t |\\n|512\\t\\t| 1.768 \\t| 4.134 | 2.794 \\t |\\n|1024\\t\\t| 2.060 \\t| 4.872 | 3.507 \\t |\\n|2048\\t\\t| 3.152 \\t| 6.235 | 4.870 \\t |\\n|4096\\t\\t| 6.600 \\t| 9.157 | 7.741 \\t |\\n\\n### Performance Comparison on MMLU\\n|Method \\t\\t\\t\\t| STEM | Social Science | Humanities | Other | Average |\\n|---------------------------------------|------|----------------|------------|-------|---------|\\n|LLaMa-3.2-1B-Instruct (5-shot)\\t\\t| 36.5 | 47.8\\t\\t| 46.3 | 45.1 | 43.1 |\\n|LLaMa 3.2 1B-Instruct 4-bit (5-shot) | 35.7 | 45.6\\t\\t| 42.2\\t | 40.6 | 40.3 |\\n|LLaMa 3.2 1B-Instruct 4-bit (0-shot)\\t| 29.4 | 33.7\\t\\t| 26.7\\t | 29.1 | 29.6 |\\n|DeBERTa-v3 + JSON distill (5-shot) | 32.5 | 43.2\\t\\t| 44.3\\t | 40.6 | 39.3 |\\n*0-shot refers to evaluating the model without any task-specific examples.\\n\\nOur method achieves performance comparable to the 4-bit quantized LLaMa-3.2-1B model on the MMLU benchmark, despite having significantly fewer parameters and using substantially less memory, especially for longer sequences. This memory advantage is particularly important in few-shot scenarios, as the inclusion of 5-shot examples significantly increases the input sequence length for LLMs, further exacerbating their memory requirements and computational cost. Furthermore, our approach significantly outperforms the 0-shot LLaMa 1B model. The reduced memory footprint and lower computational requirements of our method, especially in practical few-shot settings, make it more suitable for deployment on resource-constrained devices, offering a compelling advantage for real-world applications\\n\\n> Q2. Is there a specific reason why only the DeBERTa encoder model was tested?\\n\\nIn our initial experiments, we also evaluated RoBERTa, but it performed poorly on this task even when trained on the real dataset. We then chose to focus on DeBERTa-v3-base for two main reasons: (1) it demonstrated significantly better performance in our preliminary experiments, indicating its suitability for MCQA, and (2) it is the same architecture used for the Tasksource model, which serves as a strong baseline and allows for a more controlled comparison with our method. While we focused on DeBERTa for this study, we are open to exploring other encoder-only architectures in future work.\\n\\n> Q3. Was there a particular reason for employing few-shot learning in an encoder model instead of using a masked language model (MLM)?\\n\\nOur motivation for employing few-shot learning with an encoder model stems from the observation that traditional encoder models often require substantial amounts of labeled data to perform well. 
Our work aims to address this limitation by exploring how LLMs can enable effective few-shot learning for MCQA with encoder-only models.\\n\\nWe did not use a masked language model (MLM) because it is not directly suitable for the MCQA task. MLM focuses on predicting masked tokens within a sequence, whereas MCQA requires selecting the correct answer from a set of choices. These are fundamentally different tasks, and the MLM objective doesn't naturally align with the goal of MCQA. Few-shot learning, on the other hand, directly addresses the challenge of limited labeled data in MCQA by leveraging the knowledge embedded within large LLMs.\"}",
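For readers who want to reproduce the memory measurement described in Q1 above, a rough sketch is given below. It is an assumption-laden illustration, not the authors' actual script: `model` and `vocab_size` are hypothetical placeholders, and only the pynvml calls (`nvmlInit`, `nvmlDeviceGetHandleByIndex`, `nvmlDeviceGetMemoryInfo`) and the random-token probing follow the rebuttal's description.

```python
# Illustrative sketch of the GPU-memory probe; `model` and `vocab_size` are placeholders.
import torch
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # assumes the model sits on GPU 0

def used_gpu_memory_gb(model, vocab_size, seq_len, device="cuda:0"):
    """Run one forward pass on random tokens of the given length and
    report the device's used memory in GB."""
    tokens = torch.randint(0, vocab_size, (1, seq_len), device=device)
    with torch.no_grad():
        model(tokens)
    info = pynvml.nvmlDeviceGetMemoryInfo(handle)
    return info.used / 1024 ** 3

# for length in (128, 256, 512, 1024, 2048, 4096):
#     print(length, used_gpu_memory_gb(model, vocab_size, length))
```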
"{\"title\": \"Response to Reviewer snni(Part 5)\", \"comment\": \"We appreciate the reviewer's thoughtful feedback and are grateful for the opportunity to improve our work.\\n\\n> the performance differences observed might still be influenced by factors such as sequence length and the number of shots included. For a more meaningful evaluation, it would be beneficial to provide analyses that control for these variables, perhaps by matching sequence lengths or providing statistical significance testing of the performance differences. \\n\\nTo address the reviewer's concern about sequence length and few-shot effects, we conducted separate analyses of memory usage and performance. For the memory comparison, we fed sequences of random tokens ranging from 128 to 4096 to both DeBERTa-base and LLaMa-3.2-1B-Instruct (with and without 4-bit quantization). Using random tokens allowed us to isolate the effect of sequence length on memory, independent of content. Our results show that, for a given sequence length, DeBERTa-base consistently uses less memory than the LLaMA-3.2-1B-Instruct, even with 4-bit quantization, which is shown on Table Memory Usages During Inference (GB).\\n\\nIn evaluating performance, we compared our DeBERTa-based method to the LLaMa-3.2-1B-Instruct model (with 4-bit quantization), which exhibited the closest memory footprint to DeBERTa among the LLMs we tested. A key distinction is the input sequence length at inference: DeBERTa processes a single question and one choice at a time. In contrast, even without few-shot examples (0-shot), LLMs require longer sequences due to the instruction format, the instruction itself, and all answer choices. With few-shot prompting, this difference becomes even more pronounced. Despite this inherent LLM overhead, our 5-shot DeBERTa model achieved 39.3% accuracy on MMLU, comparable to the 40.3% accuracy of the 5-shot, 4-bit quantized LLaMa-3.2-1B-Instruct. To demonstrate DeBERTa's advantage in memory-constrained settings, we evaluated the 4-bit quantized LLaMa-3.2-1B-Instruct in a 0-shot setting, which reduces its input length but still results in longer sequences than DeBERTa. In this more comparable memory setting (relatively same sequence lengths), the 0-shot LLaMa achieved only 29.6% accuracy, showcasing the superior performance of DeBERTa under similar memory constraints.\\n\\n\\n> Secondly, while you have demonstrated improvements in the MCQA task, it appears that your method is specifically tailored to this particular format and may not generalize well to other NLP tasks. The claims about broader applicability to tasks like text classification or sequence tagging seem speculative without supporting evidence. Providing empirical results on additional tasks would help substantiate these claims and demonstrate the generalizability of your approach. Without such evidence, it is challenging to assess the overall impact of your method beyond MCQA, and there is a concern that the applicability to other tasks might be overinterpreted.\\n\\n\\nTo address the reviewer's concern about generalizability beyond MCQA, we conducted experiments on a binary classification task of judging question-answer pair correctness. Given a question and an answer, the model must classify the answer as correct or incorrect. 
We adapted our method in two ways: 1) training a binary classifier directly using sigmoid activation and binary cross-entropy loss on LLM-generated data, and 2) using a heuristic approach where we trained the model as in the MCQA setting but used a constant threshold derived from the average log probabilities of all answers in the generated data to determine answer correctness during evaluation.\", \"the_results_on_the_arc_datasets_are_presented_below\": \"## Additional Experiments on Binary Classification of question-answer pair correctness on ARC datasets\\n| Method | ARC-Easy F1 |ARC-Challenge F1|\\n|---------------------------|----------------|----------------|\\n| 1024 real data binary |56.81 ± 1.47 |40.25 ± 4.08 |\\n| 5 real data binary |27.01 ± 10.09|14.23 ± 9.64 |\\n| 1024 JSON binary |48.86 ± 1.42 |32.20 ± 6.93 |\\n| 1024 JSON MCQA heuristic |49.50 ± 1.35 |42.38 ± 0.54 |\\n\\nBoth adapted methods significantly improved upon the 5-shot baseline. Interestingly, the heuristic approach, which leverages the full probability distribution learned during MCQA training, outperformed the direct binary classification approach. We hypothesize that this is because the heuristic better captures the model's confidence in its predictions.\\n\\nWhile these results are encouraging and demonstrate the applicability of our method beyond the MCQA format, we acknowledge that further experiments on a broader range of NLP tasks are needed to fully establish its generalizability.\"}",
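The "MCQA heuristic" row in the table above can be read as the following thresholding rule. This is only a hedged sketch of the described procedure: `score(question, answer)` is a hypothetical function returning the MCQA-trained model's log-probability-style score for an answer, and the threshold is the average score over answers in the generated data, as stated in the response.

```python
# Sketch of the constant-threshold heuristic; `score` is a placeholder scoring function.
import numpy as np

def fit_threshold(generated_qa_pairs, score):
    """Constant threshold: average model score over answers in the generated data."""
    return float(np.mean([score(q, a) for q, a in generated_qa_pairs]))

def is_correct(question, answer, score, threshold):
    """Judge a question-answer pair as correct iff its score clears the threshold."""
    return score(question, answer) >= threshold
```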
"{\"comment\": \"Thank you for your valuable and detailed feedback throughout the review process. We appreciate your thorough assessment and will carefully address all the points raised, including the application of the framework in practical scenarios, deeper analysis of the encoder architecture, knowledge distillation and reasoning analysis, dataset quality, and broader contextualization in related work, in our next revision.\"}",
"{\"summary\": \"This paper aims to enhance the performance of low-computation-cost, only-encode models for few-shot multiple-choice question answering (MCQA) tasks. It leverages large language models (LLMs) to generate a high-quality, task-specific MCQA dataset for training and introduces a training approach that applies distillation loss based on LLM-assigned scores. Experimental results demonstrate the effectiveness of the proposed method: LLM-driven data generation and knowledge distillation for few-shot MCQA.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method is relatively simple and clearly explained.\\n2. The paper explores the effectiveness of using LLMs to construct data for MCQA tasks, and the proposed distillation loss training method shows notable performance improvements.\\n3. The paper conducted a relatively comprehensive ablation experiment.\", \"weaknesses\": \"1. The method's performance improvement is limited and depends on the strength of the base model. While the gains are more pronounced with the weaker DeBERTa-base model, they are minimal with the stronger Tasksource model, and even slightly decreases in the case of Decompose.\\n2. Additionally, when using DeBERTa-base, the best performance (JSON distill) achieved by using only the constructed dataset does not surpass that of a multi-task fine-tuned model (Tasksource).\", \"questions\": \"The experiments lack a more detailed analysis of the two data generation methods (e.g., example-based analysis): Why do the two methods (JSON and Decompose) lead to different outcomes in performance? Why does JSON outperform Decompose?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1T6HzuZMCz | Interpretable Surrogate Models: A Clustering Approach for Gaussian Process Posteriors Using Mixed-Integer Quadratic Programming | [
"Yuta Shikuri"
] | Gaussian process regression is a flexible Bayesian method for capturing nonlinearity.
Although recent advancements allow us to handle various types of tasks by specifying a covariance function and a likelihood function, the interpretation of its predictions is sometimes challenging due to the large number of parameters.
In this study, we propose a clustering approach to improve the interpretability of Gaussian process posteriors.
Assuming that the parameters corresponding to data points within each cluster are identical, the number of parameters in the posterior distribution is reduced.
The assignment of data points to clusters is formulated as a mixed-integer quadratic programming problem, with the objective function being a weighted squared error from the mean of the posterior distribution approximated by variational inference.
Graph partitioning and decision tree learning can be represented by incorporating linear inequality constraints into this formulation.
Experimental results demonstrated that our approach provided significant advantages in enhancing the interpretability of spatial modeling.
Moreover, our formulation has produced higher-scoring decision trees compared to the Classification and Regression Trees (CART) algorithm. | [
"Interpretability",
"Clustering",
"Gaussian Process Regression"
] | Reject | https://openreview.net/pdf?id=1T6HzuZMCz | https://openreview.net/forum?id=1T6HzuZMCz | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"o6yEbzug23",
"axHON6hCbi",
"aTGVoJs8gX",
"Zxc4GI2Upt",
"X2g0XjgdYW",
"DpanoWH2Bz",
"7IlHRDPuHJ",
"5aLK11occB",
"4iMsnQ6Xeo",
"3CPiG882TK"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"decision"
],
"note_created": [
1732660925307,
1730581200478,
1730778097569,
1732660892305,
1730675869403,
1732660913246,
1734460337960,
1732660874998,
1730388194317,
1737523434810
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1084/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1084/Reviewer_wRyA"
],
[
"ICLR.cc/2025/Conference/Submission1084/Reviewer_LWD9"
],
[
"ICLR.cc/2025/Conference/Submission1084/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1084/Reviewer_ADqV"
],
[
"ICLR.cc/2025/Conference/Submission1084/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1084/Area_Chair_DJPJ"
],
[
"ICLR.cc/2025/Conference/Submission1084/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1084/Reviewer_XALY"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"comment\": \"We deeply appreciate your insightful comments, which will greatly enhance the quality of our paper.\\nAlthough a major revision is needed, we will carefully address your feedback in our next submission.\\nThank you once again for your time and effort in reviewing our paper.\"}",
"{\"summary\": \"This paper introduces a clustering approach to improve the interpretability of Gaussian Process (GP) regression. By assuming that parameters within each cluster are identical, it reduces the number of parameters in the GP posterior, making the predictions easier to interpret. The clustering is formulated as a mixed-integer quadratic programming problem, with a weighted squared error objective based on the posterior mean approximated by variational inference. The approach also incorporates graph partitioning and decision tree learning through linear constraints.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The idea of combining clustering and GPR seems interesting.\", \"weaknesses\": \"1. My major concern is about the presentation. Some sections are hard to follow. For example, the first paragraph in the introduction, the relationship between sentences are not clear to me, it's more like a stack of facts.\\n\\n2. The main goal & contribution is not very clear to me. According to the abstract, the goal is to improve the interpretability of GPR. However, in 5.2, only prediction accuracy was discussed, but the interpretability was completely overlooked. \\n\\n3. I've been confused by \\\"parameters\\\". What do the authors mean in terms of parameters in the GPR setup?\\n\\n4. The empirical study can be improved. For example, the clustering results are only compared with k-means, but there are quite a few existing spatial clustering methods that are sometimes better than k-means. Similarly, for 5.2, only CART was considered.\", \"questions\": \"1. Can the authors explain what does \\\"parameter\\\" mean?\\n\\n2. Can the authors summarize and highlight the main goal and contribution of this manuscript?\\n\\n3. More comprehensive experiments are expected.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes using a mixed-integer quadratic programming algorithm to cluster regression coefficients in Gaussian process regression. The approach is further extended to applications in graph partitioning and decision tree growth. While the proposed algorithm is interesting and potentially valuable, I find the paper difficult to read, as it attempts to cover numerous loosely connected topics.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Using mixed integer programing as an alternative approach to iterative clustering algorithm such as K-means is indeed an interesting idea and I find the formulation in (5)-(6) quite clever.\", \"weaknesses\": \"The title of the paper is \\\"...for Gaussian process posteriors,\\\" yet the authors delve into topics like graph partitioning and decision tree growth, which don\\u2019t seem directly related to Gaussian process regression. This shift from the main theme feels distracting and makes the paper harder to follow. I would have preferred a more focused exploration of Gaussian process regression rather than these loosely connected topics. I also feel that the advantage of the proposed algorithm is not well justified.\", \"questions\": \"1. In my opinion, the may contribution of the paper should be highlighted as the advantage of using mixed-integer optimization over the iterative clustering algorithm. Indeed, as the authors pointed out, on lines 227-229, the weakness of the iterative clustering algorithm is that it can be trapped in local optimizers. Therefore, the authors should demonstrate why using mixed-integer optimization can overcome such a draw back, either through convergence analysis or extensive simulation studies. But I do not see any of such analysis in the paper, which is a bit disappointing.\\n2. The authors claim that by grouping the parameters, one can improve the interpretability of the Gaussian process regression coefficients. I fails to see why this is the case. For Gaussian process regression, it the the estimated functions or surfaces that matter most, not the regression coefficients. Please elaborate on the claimed \\\"interpretability\\\". In fact, grouping the coefficients, there is a chance of over-smooth the estimated functions or surfaces if the number of clusters are small. At least some simulation studies should be carried out to investigate these issues.\\n3. How does the K-means algorithm work in Figure~5? It looks like it is just clustering the spatial locations? Please elaborate.\\n4. For the decision tree algorithm, it is well-know that a single decision tree is not stable and sub-optimal in capturing the non-linear regression relationship. Ensemble methods such as random forest and boosting are much better. Could the proposed algorithm scale up to these method computationally?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We deeply appreciate your insightful comments, which will greatly enhance the quality of our paper.\\nAlthough a major revision is needed, we will carefully address your feedback in our next submission.\\nThank you once again for your time and effort in reviewing our paper.\"}",
"{\"summary\": \"The manuscript proposes computational methods to enhance the interpretability of the Gaussian process (GP) posterior. The methods are based on clustering of the GP posterior mean values, where a single parameter is used to approximate the posterior mean values of the data points in the same cluster and this approximation is formulated as the minimization of the weighted (the weights are derived from the posterior covariance matrix) squared loss using mixed integer quadratic programming (MIQP). The manuscript shows that two surrogate models, graph partitioning and decision tree, can be implemented in the MIQP formulation with additional linear inequality constraints.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Although the proposed methods have clear disadvantages (weaknesses), the MIQP formulations for graph partitioning and decision tree learning look novel.\", \"weaknesses\": \"1. The manuscript seems to have failed to provide attractive applications of the proposed methods to real-world problems. The experiments include only small datasets (even they look like toy datasets). The designed experiments included in the manuscript do not show the significance of the proposed methods.\\n\\n2. The proposed methods seem to suffer from high computational requirements. It seems that the proposed methods could not handle these small data sets. The manuscript does not provide any computational analysis of the proposed methods. As a result, it is difficult to understand how much computational resources the proposed methods require to solve the given problems.\", \"questions\": \"1.\\tI wonder why the authors chose to approximate the GP posterior distribution rather than the GP predictive distribution. In fact, the proposed methods are designed to approximate the GP posterior mean function values with the GP posterior covariance matrix, not the entire GP posterior distribution (i.e., a single (mode) function in the entire function space). In addition, the GP posterior mean and covariance are already approximated ones since the sparse approximation was used instead of the full GP.\\n\\n2. Please provide some details for the results reported in Table 1. \\na. How was the number of inducing points, m, chosen for the data sets? \\nb. How was the decision tree (CART) trained? \\nc. It is unclear why model accuracy can be measured by evaluating the values of the loss function (from the MIQP formulation?). In addition, it is unclear whether it is fair to compare the loss function of the two methods, since the proposed methods would provide a solution that minimizes this loss function by design.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We deeply appreciate your insightful comments, which will greatly enhance the quality of our paper.\\nAlthough a major revision is needed, we will carefully address your feedback in our next submission.\\nThank you once again for your time and effort in reviewing our paper.\"}",
"{\"metareview\": [\"Although the paper contains some interesting general ideas about GP regression and its interpretability, there are currently way too many open questions and concerns, such as:\", \"unclear potential advantage of mixed-integer programming over \\\"classical\\\" iterative clustering: why is is any better on a conceptual level\", \"unclear precise meaning of the proposed parameter grouping step on overall interpretability of GP regression.\", \"unclear experiments: why focus exclusively on prediction accuracy, if interpretability seems to be the main motivation?\", \"unclear advantage of ensemble methods like random forests.\", \"Therefore, I recommend rejection of this paper\"], \"additional_comments_on_reviewer_discussion\": \"None of the potential weaknesses of the paper could be addressed in a clear way by the authors in their rebuttal.\"}",
"{\"comment\": \"We deeply appreciate your insightful comments, which will greatly enhance the quality of our paper.\\nAlthough a major revision is needed, we will carefully address your feedback in our next submission.\\nThank you once again for your time and effort in reviewing our paper.\"}",
"{\"summary\": \"This paper explores enhancing interpretability in Gaussian Process (GP) regression by developing a surrogate model that leverages clustering, graph partitioning, and decision trees. By employing this clustering approach, the authors aim to group predictions into more interpretable segments. The model formulation as a mixed-integer quadratic programming (MIQP) problem optimizes a weighted squared error between the predicted values and the mean of the posterior GP distribution, which is approximated using variational inference. This approach seeks to balance the model\\u2019s interpretability with prediction accuracy, providing a structured methodology for creating interpretable surrogates in complex GP regression tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well-constructed, with an almost solid contribution to the field. The writing quality is high, and the problem formulation and related work are well-explained, providing a clear foundation for understanding the approach. The MIQP formulation is particularly noteworthy and offers a promising avenue for further development.\", \"weaknesses\": [\"Initial Definition of the Problem: The problem definition begins with a discussion on the interpretability of Gaussian Processes (GPs), where the first concern arises. GPs are often more interpretable than many machine learning models, particularly due to their probabilistic structure and flexible kernel choices that accommodate domain-specific assumptions. However, as complexity increases (e.g., with higher-dimensional data, and complex kernels), interpretability tends to diminish. The paper could improve its problem definition by clearly specifying which aspect of GP interpretability it seeks to address. For instance, does it aim to handle interpretability in high-dimensional data, manage the interpretability of GPs with complex kernel structures, or focus on non-stationary models? By explicitly defining these directions, the study could clarify the scope and impact of its contributions, helping readers better understand the specific interpretability challenges it addresses.\", \"Interpretation of GPs with a Large Number of Parameters: A large number of parameters poses challenges in the context of complex kernels and high-dimensional datasets. However, the estimation of these parameters occurs during the training phase, and this paper does not address that step. Consequently, the parameters cannot be altered or modified to enhance interpretability. While the paper identifies the parameters as a source of the interpretability problem, it does not offer any solutions to this issue.\", \"Novelty and Consistency: If clustering is performed before training, the problem is transformed into a distributed Gaussian process. Predicting new test data points within one or multiple clusters has been explored previously in this field. However, conducting clustering after training and solely on test points is somewhat confusing. In practical scenarios, we do not receive all new points simultaneously; instead, data is entered gradually. How can we perform partitioning under these circumstances?\", \"Complexity and Computational Cost: Interpretability becomes an issue in Gaussian processes (GPs) as complexity increases. GPs are generally expensive prediction models. 
However, the authors have integrated this complex method with mixed-integer quadratic programming (MIQP), which is computationally intensive due to its combinatorial nature and the challenges posed by non-linearities in the objective function. It does not appear that this approach can be practically applied, even if it potentially improves interpretability.\", \"Inadequate Numerical Experiments: The numerical analysis presented in the paper fails to substantiate the main claims. Simply outperforming conventional K-means clustering does not support the key assertions made. Other baselines could have been employed for comparison in the experiments, but they were not utilized in this paper.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}"
]
} |
1SYUKPeM12 | Aligned Better, Listen Better for Audio-Visual Large Language Models | [
"Yuxin Guo",
"Shuailei Ma",
"Shijie Ma",
"Xiaoyi Bao",
"Chen-Wei Xie",
"Kecheng Zheng",
"Tingyu Weng",
"Siyang Sun",
"Yun Zheng",
"Wei Zou"
] | Audio is essential for multimodal video understanding. On the one hand, video inherently contains audio, which supplies complementary information to vision. Besides, video large language models (Video-LLMs) can encounter many audio-centric settings. However, existing Video-LLMs and Audio-Visual Large Language Models (AV-LLMs) exhibit deficiencies in exploiting audio information, leading to weak understanding and hallucinations. To solve the issues, we delve into the model architecture and dataset. (1) From the architectural perspective, we propose a fine-grained AV-LLM, namely Dolphin. The concurrent alignment of audio and visual modalities in both temporal and spatial dimensions ensures a comprehensive and accurate understanding of videos. Specifically, we devise an audio-visual multi-scale adapter for multi-scale information aggregation, which achieves spatial alignment. For temporal alignment, we propose audio-visual interleaved merging. (2) From the dataset perspective, we curate an audio-visual caption \& instruction-tuning dataset, called AVU. It comprises 5.2 million diverse, open-ended data tuples (video, audio, question, answer) and introduces a novel data partitioning strategy. Extensive experiments show our model not only achieves remarkable performance in audio-visual understanding, but also mitigates potential hallucinations. | [
"Audio-Visual Learning",
"Multimodal Large Language Models"
] | Accept (Poster) | https://openreview.net/pdf?id=1SYUKPeM12 | https://openreview.net/forum?id=1SYUKPeM12 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"oBSAHrxohb",
"h95h4QJ3jF",
"X42KRuHrxz",
"STHK26Shnv",
"PV5FmmQyTg",
"3eIpeo7To6",
"1RjI3wL8zH"
],
"note_type": [
"decision",
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1737523552907,
1730639495926,
1734635167869,
1730665706395,
1732639284874,
1730641953088,
1730008774992
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3083/Reviewer_gDzU"
],
[
"ICLR.cc/2025/Conference/Submission3083/Area_Chair_hDuV"
],
[
"ICLR.cc/2025/Conference/Submission3083/Reviewer_pdCN"
],
[
"ICLR.cc/2025/Conference/Submission3083/Reviewer_gDzU"
],
[
"ICLR.cc/2025/Conference/Submission3083/Reviewer_JTeq"
],
[
"ICLR.cc/2025/Conference/Submission3083/Reviewer_f89q"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"This paper discusses the importance of audio-visual large language models (AV-LLMs) in multimodal video understanding, with a particular emphasis on the use of audio information. The paper proposes a fine-grained AV-LLM model called Dolphin, which ensures comprehensive and accurate video understanding by aligning audio and video in both spatial and temporal dimensions. To better define the task, this work proposed a related dataset(AVU) and benchmark(AVU-Bench), that contains 5.2 million diverse data pairs (video, audio, questions, answers), and a novel data partitioning strategy is introduced. Experimental results show that Dolphin performs well in audio-visual understanding and effectively reduce hallucinations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The method is soundness. The author put forward a fine grand alignment method, adding visual tokens audio, and special temporal Temporal tokens to achieve better alignment.\\nThe.\\n\\nThis paper put forward a comprehensive dataset with a promising data processing pipeline and obtained large-scale data.\\n\\nThe paper gives a benchmark based on the task definition and its dataset and compares the baseline methods.\\n\\nExtensive experiments demonstrate that Dolphin significantly improves audio-visual comprehension and is effective in reducing errors related to audio neglect.\", \"weaknesses\": \"1. The experiment is comprehensive but the baseline is weak. The method mentioned VideoLLAMA2, but the experiment seems only to compare the result with VideoLlaMA1. Adding more comparisons against these baselines would be more persuasive.\\n\\n2. The author mentioned that AVU could reduce the hallucination; while the related analysis is not included in the experiments. \\n\\n3. The meaning of \\u201cfine-grained spatial modeling\\u201d lack of definition. Please provide a clear definition or explanation of \\\"fine-grained spatial modeling\\\" in the context of their work.\\n\\n4. Although the author compares video and audio captions separately, more experiments on other audio-visual datasets are expected.\\nMany any-to-any models can have a visual-audio understanding ability. What is their performance on the given tasks?\", \"questions\": \"Please refer to the weakness.\\nOverall, I think this article is quite comprehensive, but in this era of a large number of LLM works, I think this work needs to be supplemented with more comparisons to prove that this work is novel enough to be published in ICLR.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper presents Dolphin, an audio-visual (AV) reasoning framework, that aligns audio and visual modalities in a fine-grained manner using transformer cross-attention at multiple spatial scales for each audio-visual frame. The proposed method uses a large language model to take the aligned AV features as tokens to produce reasoning responses. The paper further proposes a dataset: Audio-Visual Understanding (AVU), by putting together multiple AV benchmarks, and providing additional annotations using unimodal foundation models; the annotations are improved using expert models, LLMs based on consistency, and manual verification. Experiments are provided on audio-visual datasets over QA tasks and demonstrate promise against recent prior methods.\\n\\nThe paper received four mixed reviews, mainly inclined favorably. All the reviewers agree on the importance AV alignment in modern multimodal LLMs and support the AVU dataset the paper proposes. However, there are also concerns regarding the reliability (pdCN, gDzU, f89q) of annotations in the AVU dataset, especially given it is generated automatically using expert models. Reviewers also pointed out issues with regards to the technical contribution for multimodal alignment that has similarities to many prior works (JTeq, f89q), lack of experiments into various aspects of the model, dataset, and capabilities (JTeq, gDzU, f89q).\", \"additional_comments_on_reviewer_discussion\": \"The paper was discussed extensively between the reviewers and the authors, with the authors providing detailed point-by-point explanation of the concerns, as well as providing new experimental comparisons and results. Some of the key discussion points are summarized here.\\n\\n1. Hallucination in the expert models during dataset generation (pdCN, gDzU, f89q): To address this concern, authors provided performance of their model on video and audio hallucination benchmarks, where the results show minor improvements in avoiding video hallucination (against Video LLaMA2) and a significant improvement in mitigating audio hallucination.\\n\\n2. The novelty of the proposed architecture being straightforward or very similar to prior works (JTeq, f89q): The authors provide clarifications on the contributions. AC also observes the ablation studies provided in Table 5 speak out the importance of each component in the model. However, AC also thinks that better insights could have been provided to support the various design choices made in the architecture. \\n\\n3. Missing experiments, comparisons to state-of-the-art models, dataset annotation details, and model capabilities (JTeq, gDzU, f89q): Authors have provided many new results during the discussions, including results to VideoLLaMA2, Avicuna, Video-Salmon, PandaGPT, etc., showing improvements. One facet where the model struggles is perhaps speech recognition (as shown in the Table in response to Q2 of Reviewer JTeq).\\n\\nOverall, AC thinks the paper makes a good contribution from a dataset perspective that may be useful for training future MLLMs. While the technical contribution is weak, it appears to have some novel components that are empirically shown to lead to strong performance. Thus, AC recommends accept.\"}",
"{\"summary\": \"This paper introduces a new audio-visual LLM model called Dolphin and a new audio-visual dataset AVU. The authors discuss the existing problem with video LLMs, which is, how they often ignore the audio information present in the video and only attend to the visual information while understanding videos. The authors claim that the models do not learn any alignment between the audio and visual information in the video, which is the reason for this behavior of video LLMs. Hence the authors design the Dolphin model, which aligns the audio and visual information both spatially and temporally before feeding them to the LLM. Specifically, they use multi-scale vision transformers to extract visual features at different scales and apply cross-attention with audio features at each scale. These\\nfeatures are again merged with the global visual representation using another cross-attention. Then temporal cross-attention is applied between these features bi-directionally to obtain visual-contextualized audio tokens and audio-contextualized visual tokens. This is fed to the LLM for the downstream task.\\n\\nSince most existing video datasets focus mainly on visual content, the authors have introduced a new audio-visual dataset by using existing unimodal datasets and leveraging LLMs to generate modality-specific question-answer pairs. They generate different types of questions and answers based on metadata correspondence of the audio and visual inputs by prompting LLMs. The experiments are designed to test the new model architecture on existing video QA datasets and other unimodal tasks such as captioning and classification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The problem addressed by the authors is an important one. Most video-related datasets and models indeed ignore the information present in the audio almost completely. Hence this work is an important one to fill this research gap.\\n\\n2. The proposed model architecture achieves better results on existing video QA datasets and the ablation studies show the importance of spatial and temporal alignment layers introduced in the architecture.\\n\\n3. The dataset is large-scale and can be significant to the community to advance audio-visual understanding. \\n\\n4. The usefulness of the dataset is shown by comparing video llama trained with and without the AVU dataset.\", \"weaknesses\": \"1. The entire pipeline in the dataset generation is LLM-based. There are no discussions about the efficiency of the pipeline, hallucination effects, or error propagation in the dataset creation process.\\n\\n2. The authors claim in a lot of places in the paper that there is a significant reduction in hallucinations using their model and dataset. They design an AVU-negatives subset to train the model to say no to some questions. However, the experiments are not designed to validate this claim in any manner. While Dolphin may outperform certain models, it is unclear whether the hallucination is reduced as there are no metrics or definitions to evaluate this. It is a tall claim without any experimental results to say that hallucinations are reduced. \\n\\n3. Minor comment: Clotho-V2 which was is used as a dataset for training is not referenced.\", \"questions\": \"1. What are the effects of using pre-trained models to create a pipeline for various captioning and QA creation steps? What if any of the models hallucinated? Was there some kind of quality check done?\\n\\n2. 
I am intrigued by some of the examples of the dataset that have absolute time information, such as \"What time does the train whistle blow?\", with the model providing an answer. Do these models understand the concept of time and seconds?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "8", "confidence": "4", "code_of_conduct": "Yes"}",
"{\"title\": \"Feed back to author\", \"comment\": \"Dear author,\\n\\nHere I gave a quick feedback first, since the review period extended, I expect more discussion w/o more experiments needed.\\nI appreciate the author's attitude toward adding the experiments. In general, I personally expect more explanations and descriptions of the given experiments.\\nHowever, I already raised the score to 5->6 since most of my concerns are well addressed. While the data scale and potential use still be a limitation from my perspective.\\n\\nBest\"}",
"{\"summary\": \"The authors propose an audio-visual LLM Dolphin, which consists of a multi-scale adapter for spatial alignment and an interleaved merging module for temporal alignment. A large-scale audio-visual caption&instruction-tuning dataset AVU is also proposed, including 5.2M video-audio-qa tuples. Training on the proposed dataset, the proposed method achieves state-of-the-art performance on several audio-visual, audio, and video benchmarks compared with existing audio-visual LLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The curation process of the AVU dataset looks sound and reasonable. The authors integrate several open-source and commercial LLMs into the data pipeline to generate high-quality audio-visual captions and divide the dataset into several parts based on audio-visual consistency. The community now is facing a shortage of a large-scale audio-visual instruction-tuning dataset. The proposed dataset, along with the data curation procedure, will help the following research in the related field.\\n\\n2. The results show the proposed method outperforms several previous audio-visual LLM on audio, video, and audio-visual benchmarks. Apart from caption and question-answering, it also excels in some closed and open-ended audio tasks, which makes the framework more applicable.\\n\\n3. The ablations are comprehensive. Each component is well-ablated and clearly verified. The authors also conduct numerical analysis on the impact of the proposed dataset.\", \"weaknesses\": \"1. The method is trivial and questionable. The entire framework consists of three parts: audio and visual encoders with injected multi-scale uni-modal and multi-modal adapters, a cross-modal attention block to perform temporal integration, and a Vicuna as the decoder. The audio-visual adapters and the cross-modal attention have been proposed and utilized in many previous works[1-3], and the pipeline of training an audio-visual LLM is also not novel. The data pipeline for generating audio-visual captions is also been utilized by several previous methods[4-5]. Besides, the description of the model architecture is vague, many details are missing and the rationale of some model designs is unclear. Please see the question part below in detail.\\n\\n2. Speech is neglected in the model architecture designs. Since the audio feature is semantic and high-level, while the speech feature is low-level and dense, it is a common way to model the audio and speech separately via different encoders, such as [4, 6]. Besides, how does the proposed model outperform baseline methods on the speech recognition task as shown in Table 3 when no speech encoder or dense feature is involved? What does the model perform when compared with some speech-centric models?\\n\\n3. The application scenarios are limited. It seems that the proposed method is only suitable for audio-visual correspondence videos since the training dataset is constructed by at least medium-level AV consistency videos, while the low-level AV consistency data is used for negative samples, yet 1). how to decide whether an in-the-wild video is suitable for the model to infer? and 2). what is the purpose of aligning audio and visual encoders using high AV consistency videos? I believe the alignment stage is more likely to align the audio and visual encoder with the text decoder rather than align the audio encoder with the visual encoder. What will happen if videos with low AV consistency are introduced for training?\\n\\n4. 
Audio-visual capabilities are not fully probed. Some audio-visual tasks are not tested, such as audio-visual caption, audio-visual speech recognition, and audio-visual sound source detection as the previous method [6] does. I suggest the authors conduct experiments on these benchmarks and compare the proposed method with [6] to show the model's capability more comprehensively.\", \"reference\": \"[1] Lin, Yan-Bo, et al. \\\"Vision transformers are parameter-efficient audio-visual learners.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. \\n[2] Tian, Yapeng, Dingzeyu Li, and Chenliang Xu. \\\"Unified multisensory perception: Weakly-supervised audio-visual video parsing.\\\" Computer Vision\\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\\u201328, 2020, Proceedings, Part III 16. Springer International Publishing, 2020. \\n[3] Li, Guangyao, et al. \\\"Learning to answer questions in dynamic audio-visual scenarios.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. \\n[4] Chen, Sihan, et al. \\\"Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset.\\\" Advances in Neural Information Processing Systems 36 (2023): 72842-72866. \\n[5] Wang, Yi, et al. \\\"Internvideo2: Scaling video foundation models for multimodal video understanding.\\\" arXiv preprint arXiv:2403.15377 (2024). \\n[6] Sun, Guangzhi, et al. \\\"video-SALMONN: Speech-enhanced audio-visual large language models.\\\" arXiv preprint arXiv:2406.15704 (2024).\", \"questions\": \"1. Considering the selected audio and visual encoders are far smaller than the LLM (ViT-L and AST), why not directly train these encoders to achieve better performance since the 7b/13b LLM is also involved in training in the instruction-tuning stage?\\n\\n2. Why select ViT-L, AST, and Vicuna as encoders and decoders when tons of more powerful alternatives are available (such as SigLIP, InternViT for image, Beats, Whisper encoder for audio, and Qwen, llama3, mistral for LLM)? Is there any ablation?\\n\\n3. Why not use some video encoders to perform visual encoding both for the Dolphin model and the data curation pipeline? Is there any ablation? \\n\\n4. For the temporal integration, how does the proposed bi-directional cross-attention block 'enhance the audio-visual information exploitation of AV-LLM' as the author claims? What I see is just an attention block to perform cross-modal interaction for global features, yet how to model the temporal relationships, is positional encoding or RoPE being used? How to inject the so-called 'temporal integration information' into the dual-path framework? The descriptions are too vague and need to be improved. \\n\\n5. What is the connector between the audio/visual encoder and LLM decoder? Q-former or linear projection? Is there any ablation? \\n\\n6. How does the model tackle uni-modal tasks since the fine-grained alignment seems to be mandatory? For videos that missing the auditory part, will a modality mask perform on the input of the LLM decoder and the cross-modality integration module (both spatial and temporal)? For videos with semantic-irrelevant auditory parts, how does the model resist the potential negative information brought by the auditory modality? \\n\\n7. 
For the experiments, the authors only compare the proposed method with audio-visual LLMs; how large is the performance gap between the proposed AV-LLM and some uni-modal models?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "5", "code_of_conduct": "Yes"}",
"{\"summary\": \"This paper investigates the capabilities of audio-visual large language models (AV-LLMs) to enhance their reasoning and understanding capabilities. As existing AV-LLMs tend to neglect audio information, this paper addresses the issue from two perspectives: model architecture and dataset. For model architecture, the authors enhance both spatial and temporal alignment of AV-LLMs by proposing an audio-visual multi-scale adapter for aggregating multi-scale information and proposing audio-visual interleaved merging, respectively. For the dataset, this paper proposes a large-scale caption & instructional dataset using existing audio-visual data sources. Experimental results show that the proposed model achieves favorable performance in both audio-visual understanding and audio understanding tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation of this work, which identifies the weaknesses of AV-LLMs and aims to solve them from two different perspectives, is a sound approach to advancing research in this area.\", \"The approach to enhancing spatial and temporal alignment in audio-visual LLMs is innovative.\", \"Constructing an audio-visual caption and instructional dataset is beneficial for researchers, as there is a lack of such datasets.\"], \"weaknesses\": [\"**The main text requires further refinement. It contains typos, broken sentences, and inconsistent tenses. The reviewer has identified only some of these issues:**\", \"L67: VideoLlama is mentioned twice.\", \"L158: \\\"clap\\\" should be checked for correctness.\", \"L231: \\\"detailed\\\" is misspelled as \\\"detrailed.\\\"\", \"L414 contains a broken sentence.\", \"The right figure in Figure 1 is not explained in the main text.\", \"There is a typo in the right figure of Figure 2, \\\"his audio.\\\"\", \"The text should use \\\"/citep\\\" for citations.\", \"**The reviewer is concerned about the reliability of the dataset. Since the paper proposes a large-scale dataset, it should include a more detailed explanation, such as dataset statistics. The reviewer points out some missing or problematic aspects that lessen the dataset's reliability:**\", \"The prompt templates for constructing the meta information are not provided. These prompts are crucial as they differentiate dataset types and help manage noise in this automatically generated dataset.\", \"In Figure 6, AVU-specific, although the questions differ, the answers are identical.\", \"In Figure 9, the question asks about the sound of a frog, yet the answer discusses an unrelated aspect of color, highlighting the dataset's noisiness.\", \"To address concerns about the dataset's reliability and its claim as a benchmark, human verification of the dataset is necessary. If the dataset is noisy, researchers might hesitate to use it for evaluating models.\", \"**The comparison experiments are not thoroughly conducted. Since the paper focuses on improving the audio-visual understanding of AV-LLMs, it should include comparisons with existing high-performing AV-LLMs. Here are several models that the paper should have considered:**\", \"FAVOR: https://arxiv.org/pdf/2310.05863\", \"video-Salmon: https://arxiv.org/pdf/2406.15704\", \"PandaGPT:https://arxiv.org/abs/2305.16355\", \"OneLLM: https://arxiv.org/pdf/2312.03700\", \"**The reliability of the model's design and training is questionable. 
The inconsistencies and errors in the paper amplify these concerns:**\", \"The notations in Figure 2 and the main text differ, making it hard to understand the model's mechanism.\", \"What does the superscript \\u201ci\\u201d stand for in all notations? And what is the difference from the superscript \\u201c1\\u201d in L178?\", \"In Figure 1, how does the Dolphin model recognize the words a man says using the ImageBind audio encoder? Doesn't the ImageBind audio encoder take environmental sound as an input, not speech?\", \"In L430, the authors mention that AST was used, but do not explain how they trained or integrated this model.\", \"Table 6 not explained in the main text.\"], \"questions\": [\"L32: Does the audio modality prove crucial for comprehensive understanding? Could you substantiate this claim?\", \"Does reason (3) on L72 contradict the starting paragraph of the introduction, where the authors assert that audio is crucial for video understanding? Could the authors provide examples of when audio is crucial versus when it may be less informative than visual data?\", \"In Table 2 and Table 3, did Dolphin use unimodal signal as an input, or use both of multimodal signal for unimodal task?\", \"L67: It appears that the model trained with audio converted to text performs favorably. How would the model perform with video + audio (converted to text)? Could this combination outperform the Dolphin model? Could the authors conduct this experiment?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
1STZCCI8mn | CNS-Bench: Benchmarking Model Robustness Under Continuous Nuisance Shifts | [
"Olaf Dünkel",
"Artur Jesslen",
"Jiahao Xie",
"Christian Theobalt",
"Christian Rupprecht",
"Adam Kortylewski"
] | One important challenge in evaluating the robustness of vision models is to control individual nuisance factors independently.
While some simple synthetic corruptions are commonly applied to existing models, they do not fully capture all realistic distribution shifts of real-world images. Moreover, existing generative robustness benchmarks only perform manipulations on individual nuisance shifts in one step.
We demonstrate the importance of gradual and continuous nuisance shifts, as they allow evaluating the sensitivity and failure points of vision models. In particular, we introduce CNS-Bench, a Continuous Nuisance Shift Benchmark for image classifier robustness. CNS-Bench allows generating a wide range of individual nuisance shifts in continuous severities by applying LoRA adapters to diffusion models. After accounting for unrealistic generated images through an improved filtering mechanism for such samples, we perform a comprehensive large-scale study to evaluate the robustness of classifiers under various nuisance shifts. Through carefully-designed comparisons and analyses, we find that model rankings can change for varying shifts and shift scales, which is not captured when averaging the performance over all severities. Additionally, evaluating the model performance on a continuous scale allows the identification of model failure points, providing a more nuanced understanding of model robustness. Overall, our work demonstrated the advantage of using generative models for benchmarking robustness across diverse and continuous real-world nuisance shifts in a controlled and scalable manner. | [
"Generative models",
"benchmarking",
"computer vision"
] | Reject | https://openreview.net/pdf?id=1STZCCI8mn | https://openreview.net/forum?id=1STZCCI8mn | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"w6Cm3Q5KGd",
"rt5b7AIFW5",
"rhpad6AI2Z",
"n7z4Jf4CvC",
"mw47cHUkBa",
"iBUMUTnB4u",
"hidMZL9CVy",
"eHoIG8cYJk",
"dv3Tm6Gerq",
"dcSiEPJS8T",
"Yp1nqWfgVs",
"YD2VSu7bJU",
"WRM9biisID",
"W2NpLxa3yd",
"VUuXuOq7n6",
"U5CJuDsA3M",
"RvJpaRPebZ",
"Oq8LgC8vbE",
"OheaoDJiKB",
"Oh24Cijc13",
"OGRvvO6B3V",
"OAcYXwsOg8",
"McIXby8HRf",
"JMJAWk4S6P",
"I7GYuHDCjX",
"HPQ4RotAdT",
"EzPGAsisSt",
"DXfSUQIBl3",
"74kNzWS6ay",
"53EgJ7uhH4"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732606247390,
1734818237773,
1733059172330,
1737524034729,
1730347007621,
1732637651122,
1732231244881,
1732230928166,
1732231066645,
1732499850324,
1732708177993,
1732708397728,
1730162549215,
1732231168128,
1732621373890,
1732231375538,
1733305707819,
1730633093389,
1732231563025,
1730642114960,
1733305795625,
1732622623039,
1730050935189,
1732230702395,
1732231685978,
1733305987369,
1733305887387,
1732565052545,
1732231338154,
1733306041455
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Area_Chair_26Tg"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_1iK7"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_LxGf"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_zdn7"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_LxGf"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_zdn7"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_1iK7"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_xh13"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_8WvD"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_xh13"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_1iK7"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Reviewer_zdn7"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10228/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Limitations added\", \"comment\": \"Thank you very much for your response.\\nWe agree that this point is, indeed, important to consider for people using our benchmark or researchers working on improving generative benchmarks.\\n\\nWe added the following at the end of the method sliders section (l.232 in the updated manuscript):\\n\\\"Since our sliders do not explicitly exclude confounding variables, the applied shifts may also affect confounders inherently present due to biases in the training data. For example, as shown in \\\\cref{fig:shift_dust}, using the \\\\textit{in dust} slider unintentionally removes half of the people, and in \\\\cref{fig:shift_video_game}, the background no longer represents a forest. Consequently, failures in our subsequent analysis cannot always be solely attributed to the nuisance concept itself.\\\"\\n\\nWe also added a section about the limitations in the conclusions (l.531):\\n\\\"While our approach allows for diverse continuous nuisance shifts, it does not eliminate all confounders, meaning failures cannot always be solely attributed to the targeted nuisance concept. This highlights an inherent challenge for generative benchmarking approaches, and future advances in generative models could help mitigate these confounding factors. Additionally, while we have carefully addressed this issue in our work, we acknowledge that using generated images can lead to biases arising from the real vs. synthetic distribution shift. \\n\\nWe hope this benchmark can encourage the community to continue working on more high-quality generative benchmarks and to adopt generated images as an additional source for systematically evaluating the robustness of vision models.\\\"\\n\\nPlease let us know if you have other concerns.\\n\\nThank you very much again for your valuable review.\"}",
"{\"metareview\": \"The manuscript introduces CNS-Bench, a benchmarking framework utilizing LoRA adapters and diffusion models to evaluate vision model robustness under continuous nuisance shifts. While the paper is technically sound and offers a creative application of generative models, it does not provide compelling evidence of substantial novelty or practical utility. Below are key points summarized from reviewer feedback:\", \"strengths\": [\"Technical Soundness: The application of LoRA adapters and generative models is well-executed, with thorough evaluations of generated image quality and classifier robustness.\", \"Innovative Approach: The paper proposes a unique method for realizing continuous nuisance shifts using generative models, which could inspire further research.\", \"Detailed Analysis: The authors conduct robust experiments, including failure point analyses, and address feedback with significant manuscript improvements.\"], \"weaknesses\": [\"Limited Novelty: The methodology primarily combines existing techniques without substantial innovation. The benchmark does not surpass existing alternatives like ImageNet-C in terms of practical utility or realism. Continuous nuisance shifts, while novel in implementation, do not demonstrate significant new insights or impact.\", \"Utility and Scope: The practical relevance of the benchmark is questionable, given modest performance drops even for challenging nuisance shifts. The generated images often fail to emulate realistic OOD scenarios, particularly for weather-based shifts like fog and rain.\", \"Lack of Clear Impact: The paper fails to articulate a strong \\\"claim to fame\\\" or compelling use case. Model rankings remain largely consistent across scales, reducing the value of continuous shift evaluation.\", \"Presentation Issues: The manuscript lacks clarity in critical sections (e.g., methodology). Figures and metrics (e.g., accuracy drop) were initially unclear, though these were partially addressed during the rebuttal.\", \"While the manuscript demonstrates solid technical work and engages effectively with reviewer feedback, it lacks the novelty, clarity, and demonstrated utility required for publication at ICLR. The benchmark\\u2019s incremental contribution and limited practical relevance suggest that it is not ready for acceptance in its current form.\"], \"additional_comments_on_reviewer_discussion\": \"Key Points Raised by Reviewers and Authors\\u2019 Responses:\", \"novelty_and_utility\": [\"Point: Limited innovation; unclear practical utility of continuous shifts.\", \"Response: Authors argued that the benchmark offers new ways to analyze robustness with realistic nuisance shifts. 
Highlighted insights on model ranking changes across scales.\"], \"outcome\": \"Reviewers remained unconvinced about practical impact, citing marginal utility.\", \"realism_and_quality_of_generated_images\": [\"Point: Images (e.g., rain, fog) do not represent challenging OOD scenarios; confounding effects noted in generated images.\", \"Response: Authors acknowledged limitations and added discussions about biases and confounders in generative models.\", \"Outcome: Reviewers appreciated acknowledgment but emphasized limited relevance of such shifts.\"], \"clarity_and_presentation\": [\"Point: Poor clarity in methodology (e.g., failure points, calibration of shifts).\", \"Response: Authors revised key sections, improved figure captions, and added confidence intervals to results.\", \"Outcome: Reviewers noted improvements but retained concerns about the benchmark\\u2019s overall coherence.\"], \"benchmark_evaluation_metrics\": [\"Point: Questions on relevance of metrics like accuracy drop and failure points.\", \"Response: Clarified metrics and removed averaged failure points across shifts.\", \"Outcome: Clarifications addressed confusion but did not substantially change reviewers\\u2019 impressions.\"], \"scope_of_contribution\": [\"Point: Benchmark\\u2019s contributions overlap with existing methods (e.g., ImageNet-C).\", \"Response: Authors emphasized scalability and unique application of LoRA adapters.\", \"Outcome: Reviewers accepted this as a methodological contribution but viewed it as incremental.\"]}",
"{\"comment\": \"Dear Authors, \\\\\\n\\nthank you for the clarifying response about the y-axis scale for the accuracy drop, I am relieved.\\n\\n\\u201cplease note that their application already significantly reduces the performance of the classifiers (e.g., 20% for ResNet-50 for snow)\\u201d \\\\\\nAccording to figure 9, the accuracy of the (ImageNet-trained) RN-152 hardly changes up until scale 1.5 for corruptions snow, fog, smog and rain. Even under the correct interpretation of the y-axis, this seems like a rather small reduction in accuracy. But fair point, the fact that the accuracy changes at all for the larger scales potentially reveals that models rely on spurious background features. \\n\\n\\u201cWe agree with the concern of the reviewer and removed results with averaged failure points.\\u201d \\\\\\nI appreciate this - calculating FPs only for each shift seems more reasonable.\\n\\n\\u201cTo address your original comment whether model rankings actually change significantly, we perform a statistical test\\u201d \\\\\\nThank you for conducting this analysis, good idea! \\n\\n\\u201cNow, we also updated section Sec. 3.1\\u201d \\\\\\nThanks, looks better!\\n\\nTo summarize my final opinion on the paper, I think that the suggested methodology, while not entirely novel, could prove to be useful, especially to simulate certain new distribution shifts that would be otherwise hard to simulate. This warrants an increase of my score to a 6, even though the benchmark itself still fails to convince me, because its marginal utility seems fairly low -- mainly because the images just seem too easy and the drops in accuracy (even if correctly interpreted; my apologies for misunderstanding this earlier) are rather modest, even for ImageNet-trained models. I'm afraid that in the era of models trained on web-scale datasets, the utility of this specific benchmark might be rather low and I doubt that it will see much use, but the methodology is valuable work that others could build on.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper introduces a benchmark, CNS-Bench, composed of synthetic images with gradual and continuous nuisance, to evaluate the robustness of classifiers in detail. The images are generated using Stable Diffusion, incorporating a wide range of individual nuisance shifts with continuous severities through LoRA adapters to diffusion models. This paper provides a detailed evaluation and analysis of various classifiers' behavior on CNS-Bench, emphasizing the advantage of utilizing generative models for benchmarking robustness across diverse continuous nuisance shifts.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-motivated. Understanding the robustness of models to nuisances of varying degrees is crucial.\\n2. It is reasonable to generate images with gradual and continuous nuisance using Stable Diffusion and LoRA adapters.\\n3. The experimental section evaluates various classifiers, providing a better understanding of the robustness capabilities of these classifiers.\", \"weaknesses\": \"See questions.\", \"questions\": \"1. I don\\u2019t understand the failure point concept in Section 3.2. This section may contain many symbols that are confusing, such as: $X_n(S), X(s_n),X_n(S_n)$ , and the subscripts in $s_n, c_n$.\\n2. In Section 4, the paper mentions \\\"activate the LoRA adapters with the selected scale for the last 75% of the noise steps\\\". Could you provide some theoretical or empirical evidence to justify the rationale for adjusting LoRA for the last 75% of the noise steps?\\n3. In Section 4.2, the paper mentions \\\"fine-tune ResNet-50 with our data and show more than 10% gains on ImageNet-R\\\". Was the data used for fine-tuning the entire CNS-Bench or a specific style within it (such as a style closely resembling ImageNet-R distribution)? In Table 3, I noticed that after fine-tuning, the model accuracy on IN/val decreased by 2.04%. I believe the results in Table 3 do not fully support the claim regarding \\\"the realism of generated images.\\u201d\\n4. For experiment about the relation between ID and OOD accuracy in section 4.3\\uff0cplease further elaborate on the rationale for using the slope of the linear fit between ID and OOD accuracies and the significance represented by this slope. Why not use the linear correlation coefficient\\uff1fFurthermore, please provide a more detailed analysis of the results in Figure 7, particularly elucidating the impact of the strength of nuisance on the relation between ID and OOD accuracy.\\n5. Figures 6a and 6b evaluate the accuracy drop. I do not think this metric rational because the model size and performance on the ImageNet validation set may not necessarily align. This mismatch could result in accuracy drops of different models that are not directly comparable. Please provide the model's parameter count and the model accuracy on IN/val for reference or other evidence to claim rationality the accuracy drop.\\n6. Figures 4 and 5 assess using accuracy, while Figure 6 employs accuracy drop. Could you standardize to a single metric for consistency throughout the text?\\n7. ImageNet-C also contains images with nuisances of different strengths. What are the distinctions between CNS-Bench and ImageNet-C?\\n8. 
Could you give some experimental details for the claim \u201cthe alignment for one given seed increases in 73% for scales s > 0 for all shifts in our benchmark\u201d in Section 3.2?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "5", "code_of_conduct": "Yes"}",
"{\"comment\": \"Thanks for your response and adding the limitations to the manuscript. I have updated my score and I'm leaning towards acceptance.\"}",
"{\"title\": \"Response 2 to reviewer LxGf\", \"comment\": \"> Figures 6a and 6b evaluate the accuracy drop. I do not think this metric rational because the model size and performance on the ImageNet validation set may not necessarily align. This mismatch could result in accuracy drops of different models that are not directly comparable. Please provide the model's parameter count and the model accuracy on IN/val for reference or other evidence to claim rationality the accuracy drop.\\n\\nOur benchmark mainly focuses on the accuracy drop to measure the relative OOD robustness of a model or the performance degradation for the case of nuisance shifts, following Hendrycks et al. (2018,2021). But we agree that there exists a relation between ID and OOD accuracy. We considered and reported the relation between ID and OOD accuracy in Fig. 7. To additionally address your comment, we report the ImageNet validation accuracies and the model parameter counts in Tab. 5.\\nWhile the model parameter count purposefully varies for the *model size* axis, we use the same ViT-B backbone for the *pre-training* axis, and a comparable number of parameters for the *architecture* axis. To further address your question, we additionally compute the accuracy gain after subtracting the effect of an improved OOD accuracy taking into account the linear fit that we discussed previously. We plot the results along the model architecture axis in Fig. 19. We find that ConvNext achieves the best accuracy gain after removing the linear dependency of an improved ID accuracy. However, we underline that this measure needs to be considered cautiously since the (1) linear fit depends on the selected models for computing the statistics and (2) the linear fit might not always describe the relationship sufficiently well. Consider, e.g., the discussions in '_Accuracy on the Curve: On the Nonlinear Correlation of ML Performance Between Data Subpopulations_' by W. Liang (2023) or in 'Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation' by A. Sanyal (2024).\\nTherefore, we would rather argue for consistently using the accuracy drop as a measure of robustness. We are happy to get your feedback on this point.\\n\\n> Figures 4 and 5 assess using accuracy, while Figure 6 employs accuracy drop. Could you standardize to a single metric for consistency throughout the text?\\n\\nThanks for pointing out that inconsistency. We updated the figures accordingly.\\n\\n> What are the distinctions between CNS-Bench and ImageNet-C?\\n\\nImagenet-C does not contain real-world shifts but only simple corruptions, which is a fundamentally different nuisance shift, not capturing all realistic distributions shifts.\\nWe also refer to the discussion in the general comment (i).\\n\\n> Could you give some experiment details of the claim \\u201cthe alignment for one given seed increases in 73% for scales s > 0 for all shifts in our benchmark\\u201d in Section 3.2?\\n\\nTo evaluate whether the application of our sliders yields an increase of the desired shift, we compute the CLIP alignment of the generated image to the text prompt describing the shift. Increasing the scale $s$ by 0.5 increases the CLIP alignment in 73% of the cases. This shows that the shift is increased in the majority of the cases when relying on the CLIP alignment score. 
An explanation for cases where the CLIP score does not increase can be that the shift applied by the LoRA slider does not exhibit the characteristics that CLIP measures for that shift, although the change can be visible to a human. We illustrate such a case in Fig. 16, where the painting shift can be observed but the CLIP alignment to the shift drops.\nWe moved this part to Sec. A.5.4 in the supplementary and added explanations there.\"}",
"{\"title\": \"Response to reviewer 8WvD\", \"comment\": \"> The novelty of the insights presented in this paper could be more compelling. For example, in Figure 6, are there any underlying reasons or mechanisms that could provide a deeper understanding of the results?\\n\\nThank your for pointing out that our discussions were not clearly stating the novelty of our observations. We try to formulate the key insight of our benchmark in the following more clearly.\\n\\nFirst of all, we find that there is not a single model that rules all realistic shifts and scales equally well.\\nWhen averaging the performance over all shifts, model rankings do not heavily change for different scales (Fig. 6a). This states that a robust model that is robust on a weakly-shifted OOD dataset A tends to be robust as well on heavily-shifted dataset B. However, considering this average metric is not sufficient to evaluate the robustness of a model on a specific OOD scenario: When comparing the performances for individual nuisance shifts, the model rankings can significantly change for different scales. This effect depends on the considered shift and models. E.g., we observe that the effect of weather variations, such as rain or fog, results in varying performance for different scales and shifts but increasing the scale does not significantly change the model rankings (Fig. 9). This is similar to ImageNet-C, where different corruptions lead to different performance drops over varying scales (Fig. 22). However, some style changes impact the models clearly differently for various scales (Fig. 6b and 9). We believe that this might be attributed to the effect that different models focus on different characteristics of the classes, which are modulated differently at different scales of the shift, which, we think, is a note-worthy novel finding.\\n\\nWe hope, this addressed your comment appropriately. Depending on your feedback, we will update the discussions in the main paper accordingly.\\n\\n> There is a lack of clarity on how the dataset handles potentially unrealistic or counter-intuitive scenarios, such as cars driving on water.\\n\\nWe believe there are two points to address this question. Our benchmark follows the statistics of the training data of the generative model, which captures the distribution of available images. Unrealistic cases can however still happen but it is typically hard to automatically differentiate between edge cases and physically implausible or unrealistic cases. Consider, e.g., the presented examples in Fig. 30 that relate to the example of cars driving on water.\\n\\nNevertheless, it could be argued whether such unrealistic or counter-intuitive scenarios are problematic as long as the class is still recognizable since humans also generalize to edge cases. It might be even a motivation to use generative models for benchmarking to generate edge cases that only rarely occur in real world scenarios. Thank you for raising this point.\\n\\nWe hope our answer addressed your question appropriately. Please feel free to follow up if you have any remaining questions or concerns.\"}",
"{\"title\": \"Response to reviewer xh13\", \"comment\": \"> Unclear contributions: The contributions listed in the paper seem overlapping.\\n\\nThank you for pointing out this. We re-structured our contributions differently to reduce the overlap and we refer to the common response (ii). The new structure is that we separate (1) the framework for continuous shifts, (2) the OOC filtering strategy, and (3) the evaluation and analysis of various models.\\n\\n> While the paper claims 14 distinct nuisance shifts as a key contribution, it lacks an explanation or rationale for selecting these specific shifts. \\n\\nWe added the motivation for the 14 shifts in the introduction and in the experimental details. In short, we selected those diverse shifts as an example application of our approach, mainly inspired by ImageNet-R (8 shifts) and real-world weather shifts (6 shifts). Due to the scalable nature of our generative benchmark, our framework can be used for computing the robustness with respect to other shifts that can be expressed via a text prompt or a set of images as well. We are eager to test others shifts that the reviewers might have in mind.\\n\\n> Ambiguity in benchmark superiority : The authors assert that their benchmark outperforms existing benchmarks for evaluating model robustness by incorporating nuisance shifts across multiple severity levels. However, earlier works by Hendrycks & Dietterich (2018) and Kar et al. (2022) already support multi-severity analysis for vision model failure points. Thus, the authors should clarify how their benchmark framework distinctly advances beyond these existing approaches.\\n\\nWhile the mentioned works allow multiple severity levels, they are restricted to synthetic or simple semantic corruptions. Our approach allows real-world distribution shifts and can be easily scaled to other shifts. Applying diverse and realistic natural real-world shifts is fundamentally more challenging and motivates the application of generative models. We do not consider our work as superior but complementary to the benchmarks that support simple multi-scale corruptions.\\n\\nWe also refer to our common response (i).\\n\\n> Inconsistent statements on model robustness: In line 451, the authors claim that transformers are more robust than CNNs, yet this statement seems contradicted by Fig. 6a, where ConvNext outperforms ViT and DeiT but performs slightly worse than DeiT3. This inconsistency suggests that CNNs may not always be less robust than transformers, and the statement should be re-evaluated or clarified.\\n\\nWe believe there may be a misunderstanding. Our findings indicate that CNNs generally perform worse than transformers when using modern training recipes like DeiT3, which we also tried to communicate. We noted in the paper: \\\"A modern CNN (ConvNext) outperforms transformers but is less robust than when using modern training recipes (DeiT3).\\\" (l.451 in the original version). We apologize for the confusion and we reformulated that paragraph for clarity.\\n\\n> Validation of realistic nuisance shifts: While the authors argue that the benchmark includes realistic nuisance shifts, the realism of these diffusion-generated images is not substantiated. Proper validation, such as human assessment, would enhance the credibility of this claim.\\n\\nThank you for raising that important point. We refer to the common response (iii) for a discussion of our strategies to account for the realism. 
Specifically, following your advice, we also validated the realism of our images through human assessment, which we document in Sec. A.7. One percent of our benchmarking samples were detected to be not a sample of the desired class according to our user study where each image was checked by two different individuals.\\n\\n> Readability of figures: The font size in several figures is too small, which detracts from readability. Increasing the font size would improve clarity for readers.\\n\\nThank you for that comment. We updated the figures accordingly.\\n\\n> Self-supervised pre-training: Why is DINOv1 using linear probing compared with other models? This seems to create an unfair comparison, as linear probing may not fully reflect the robustness of self-supervised models relative to other models in the evaluation. Could you clarify the rationale behind this comparison approach?\\n\\nOriginally, we only compared using the publicly available linear-probed DINOv1 model since no fine-tuned model was available. To address your comment, we fine-tuned DINOv1 and updated the results and discussions accordingly (consider, e.g., Tab. 1,2,3 and Fig. 6). The fine-tuned DINOv1 model achieves a comparable performance to MoCov3 on CNS-Bench, with a better performance on small scales but achieving slightly worse results on larger shift scales.\"}",
"{\"comment\": \"Thanks for your detail rebuttal, I think most of my concerns are solved, I will change the score.\"}",
"{\"title\": \"Clarification of accuracy drop\", \"comment\": \"Thank you again for your response. We will address your comments individually:\\n\\n> [...] the accuracy drop you refer to in e.g. figure 5 is the absolute delta in classification performance, not the relative one. In other words, the accuracy of a ResNet-50, which should sit at a performance of about 80%, drops by 0.6 percentage points (e.g. to an accuracy of 79.4%) on the largest scale of your dataset, is that correct? Are you seriously proposing an \\\"OOD-benchmark\\\" for which even the hardest corruptions reduce the performance of a vanilla ResNet-50 by less than 1%? Please clarify this point.\", \"we_are_sorry_for_the_misunderstanding_and_we_would_like_to_clarify_the_scale_of_the_y_axis\": \"The y-axis is, indeed, the absolute performance drop. However, we do not report in % but in the value range [0,1] throughout our whole analysis. Therefore, the correct interpretation of the graph is that, for example, ResNet drops by 60 percentage points for the nuisance shift 'cartoon' (i.e. 0.6).\\nWe updated the caption of the figure accordingly and also mentioned it explicitly in the experimental details (*Metrics*) to ensure a proper understanding.\\n\\n> The \\u201cdust\\u201d, \\u201crain\\u201d, \\u201csandstorm\\u201d, \\u201cfog\\u201d, and \\u201csnow\\u201d categories do in my opinion not pose an OOD scenario at all, since they do not even occlude the object, but merely change the background slightly.\\n\\nOcclusions of objects are indeed an interesting OOD factor to consider. However, OOD scenarios can also include many other factors of distribution shifts, e.g., texture variations, high frequency corruptions or background changes. \\nWhile the applied nuisance shifts in terms of weather conditions are sometimes not as drastic as one might expect, please note that their application already significantly reduces the performance of the classifiers (e.g., 20% for ResNet-50 for snow) and, hence, clearly induces an out-of-distribution shift of the data. Please also consider that the presented subset includes images with varying scales and some images actually include (partial) occlusions.\\n\\n> This came as a surprise to me, because I don't see how these images would be harder for any reasonable model than uncorrupted images.\\n\\nIt is a known phenomenon that classifiers make mistakes that are rather unexpected for humans. For example, it was shown that classifiers can fail to recognize objects due to simple changes that would not fool a human observers, such as changes in the object pose [A], the background [B], or the texture of the object [C]. This observation is commonly attributed to the \\\"shortcut learning\\\" phenomenon [D], where models rely on simple, often misleading patterns in the data, rather than understanding the underlying concepts. \\nIn this context, our framework is the first that enables the systematic evaluation with respect to diverse and continuous real-world nuisance shifts, providing deeper insights into model robustness.\\n\\n[A] Michael A. Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen. \\\"Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.\\n\\n[B] Rosenfeld, Amir, Richard Zemel, and John K. Tsotsos. \\\"The elephant in the room.\\\" arXiv preprint arXiv:1808.03305. 
2018.\\n\\n[C] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" International Conference on Learning Representations. 2018.\\n\\n[D] Robert Geirhos, J\\u00f6rn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. \\\"Shortcut learning in deep neural networks.\\\" Nature Machine Intelligence 2.11: 665-673. 2020.\"}",
"{\"title\": \"Answers to further remarks\", \"comment\": \"> Calibrating shifts is difficult [...] I agree, hence my question about this - I don\\u2019t think it is legitimate to calculate the failure point by averaging over the different shifts, since the different shifts could be on very different failure scales.\\n\\nWe agree with the concern of the reviewer and removed results with averaged failure points (Tab. 4 in the previous manuscript). Please note that we motivated the failure point computation for individual shifts (l. 465-472). \\n\\n\\n> I appreciate this, and find the confidence intervals smaller than expected (I assume this is a 95% confidence interval?)\\n\\nThe plots in Fig. 13 and 14 depict the one-sigma confidence interval, as mentioned in the caption. To address your original comment whether model rankings actually change significantly, we perform a statistical test: We test whether the estimated accuracy drops are significantly different using the two proportion z-test and a p-level of 0.05. For example, for the painting style nuisance we find that the accuracy drop differences between RN152 and ViT, ConvNext and ViT, DeiT and ViT, DeiT3 and ViT significantly change the sign when increasing the severity of the shift.\\nWe further checked which shifts result in significant model ranking changes for the considered models along the architecture axis. We observe such statistically significant changes for the following shifts: painting style, the style of a tattoo, heavy sandstorm.\\n\\n\\n> Currently, section 3.1 still is not self-containing, as somebody unfamiliar with the work of Vendrow et al 2023 and Gal et al 2023 will have a hard time understanding what was done. It would not hurt to explain a bit more of what was done here. In line 194 there\\u2019s a \\u201cthat\\u201d which doesn\\u2019t belong there.\\n\\nFollowing your initial feedback, we specifically addressed the criticism about the missing clarity of Sec. 3.2. Now, we also updated section Sec. 3.1. We hope this revised version is clearer and we are looking for your feedback. We are willing to add a more formal explanation of the diffusion model objective if you think, this should not be missing.\\n\\nPlease let us know if you see potential for more clarity in other parts of the paper. We are happy to address these points as well. Thank you for your thorough and constructive feedback!\\n\\n> That\\u2019s interesting, thanks. What I was wondering about was why the FP ratio would decrease for the larger scales again (I would have expected the results to look like your table does, with more mis-classifications at the larger scales).\\n\\nThe failure point distribution quantifies when a classifier *starts* failing (per scale), which rather relates to the performance drop with respect to the previous scale. In contrast, the presented table measures the absolute accuracy drop per scale with respect to scale 0. In that case, the drop from scale 1 to scale 1.5 is the largest (0.13-0.02=0.11). That means, the classifier starts failing more often at intermediate scales for such shifts.\"}",
"{\"summary\": \"This paper introduces CNS-Bench, a benchmark for evaluating the robustness of image classifiers to what the authors call \\\"continuous nuisance shifts\\\" - essentially OOD distortions like no snow -> snow along a continuous axis. CNS-Bench uses LoRA adapters applied to diffusion models to generate images with a wide range of nuisance shifts at various severities. While in principle continuous shifts are possible, most of the article nevertheless focuses on a fixed number of shifts (5 severity levels). The authors then conducted an evaluation of few different visual image classifier families on CNS-Bench.\\n\\nThe paper's contributions are defined, by the authors, as follows:\\n1. The creation of CNS-Bench & evaluation of models\\n2. The collection of an annotated dataset for filtering (note: this is a process that becomes necessary since the approach used in the paper may alter the class label, therefore this essentially fixes an issue introduced by the approach)\\n3. The publication of 14 nuisance shifts at five severity levels. (note: this is essentially part of #1)\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Authors promise to release the dataset under a permissive licences (CC-BY-4.0); code is available from supplementary material via Google Drive.\", \"I like the approach of measuring a precise failure point. In psychophysics, a related concept is called the threshold of a model - see, e.g., Figure 4 of this 2017 paper on \\\"object recognition when the signal gets weaker\\\": https://arxiv.org/pdf/1706.06969. A threshold is calculated across many samples; the failure point described in this article, in contrast, is the point where an individual test sample is no longer correctly recognized.\", \"The technical approach is a nice, simple and creative application of generative diffusion models.\"], \"weaknesses\": [\"1. Nuisance shifts affect information that's not related to the nuisance concept. In Figure 22 and 23, some nuisance shifts don't achieve the desired result; e.g. the variation \\\"in rain\\\" (Fig 23f) alters/blurs the background without occluding the object through rain. **Some nuisance shifts introduce confounds**, e.g. \\\"in dust\\\" not only adds dust but also removes half of the people in the image and changes a person's shirt color from red to black. As a consequence, failures cannot be attributed to the nuisance concept itself.\", \"2. The approach is based on generative models, thereby introducing a **real vs. synthetic distribution shift** that may further influence results. A discussion - better yet: an analysis - of this likely confound is recommended. Without this, I'm hesitant to share the author's hope that (\\\"this benchmark can encourage the community to adopt generated images for evaluating the robustness of vision models.\\\").\", \"3. **The paper's main claim to fame remains a bit unclear to me**, and that's my most important concern. At the same time, this might be the biggest opportunity for improvement and clarification from which future readers might benefit. The authors propose a variety of options to choose from, but I'm not convinced (yet - happy to be convinced of the opposite). Specifically:\", \"Is it about continuous shifts? If so, this can be achieved with parametric distortions too (e.g. Gaussian noise with noise strength as a continuous parameter). 
Furthermore, the authors end up narrowing it down to 5 severity levels anyways, which is roughly in line with the 5-8 levels from related work.\", \"Is it about a large number of distortions? Probably not, since the dataset's 14 distortions are in the same ballpark as ImageNet-C (15 test + 4 validation distortions) or model-vs-human (17 distortions).\", \"Is it about testing a variety of models? While a number of model families are investigated (CLIP, ConvNext, Deit, Dino, MAE, MOCO, ResNet, ViT) that's also similar to previous investigations, some of which tested a broader variety.\", \"Is it about identifying failure cases? If so, when is it important to know about a specific failure case (as opposed to a model's threshold, averaged across many samples)?\", \"Is it about the connection between architecture and robustness? The observation that architecture influences model robustness has been reported (extensively) by a range of previous work.\", \"Is it about precise control? While strength can be controlled, the effect introduced by the nuisance can't be controlled to a level where no confounds would be introduced, as seen in Figures 22 & 23.\", \"Is it about scalability? If so, why is training separate LoRA adapters for each ImageNet class and shift more scalable than existing approaches?\", \"Is it about real-world nuisance shifts? If so, see concern #2 on the real vs. synthetic distribution shift.\", \"I recommend that the authors clearly state and justify what they believe is the primary novel contribution (\\\"claim to fame\\\") of their work, and how it advances the field beyond existing benchmarks and approaches.\"], \"questions\": [\"If it should be a benchmark, people will want to know: who won? What's the score that I need to beat in order to be SOTA? Tables with an overall score would help. Table 1 is a step in the right direction but it's not immediately clear which model is best and which score (Accuracy? Accuracy drop?) is the benchmark score.\", \"Why were those 14 \\\"nuisances\\\" chosen and not others, why 14 and not, say, 50? (Not saying that the authors should do this but asking out of curiosity)\", \"What's the robustness of a failure point to random variation?\", \"Is performance (accuracy) always a monotonous function of the LoRA slider strength? Are there instances when that's not the case? If so, what does it mean if there are images beyond the failure point that are again correctly recognized?\", \"line 43: \\\"such approaches are not scalable\\\" - why not? If one takes a large dataset and applies cheap corruptions like the ones from ImageNet-C, should that be considered less scaleable?\", \"What's the computational cost of generating the dataset?\"], \"misc\": [\"Figure 7: instead of re-using colors that were used in e.g. Figure 6 with a different association, I'd recommend using different colors here to avoid confusion - ideally a sequential color palette, with the legend sorted by scale not arbitrarily. Also, label 1.5 appears twice which is probably not intentional.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer LxGf\", \"comment\": \"> I don\\u2019t understand the failure point concept in Section 3.2.\\n\\nWe revised that section and we removed the indices for a more simplistic notation. We are happy to get your feedback on the updated version. In essence, the failure point distribution captures at what shift scale a classifiers fails how often. Mathematically, we define the failure point distribution through a histogram that counts the number of failure cases that occur at one of the shift scales that our benchmark considers.\\n\\n> Could you provide some theoretical or empirical evidence to justify the rationale for adjusting LoRA for the last 75% of the noise steps?\\n\\nOur goal was to maintain the coarse semantic structure of the image. Therefore, we do not activate at earlier timesteps. We mainly guided our parameter choice following Gandikota et al. (2023). Consider Fig. 13 in their supplementary for this, where they show that 75% results in a significant application of the slider concept with spatial structure preservation. Our initial experiments confirmed this. Varying this value to analyze different nuisance shifts can be an interesting step in the future.\\n\\n> Was the data used for fine-tuning the entire CNS-Bench? [...] after fine-tuning, the model accuracy on IN/val decreased by 2.04%.\\n\\nWe used the entire CNS-Bench for fine-tuning. The reduced performance could be attributed to the fact that the model has to stop considering details it has learned for differentiating ImageNet classes, to improve the ImageNet-R performance. However, stylization requires less focus on the texture, which can eventually deteriorate the ImageNet performance. We refer to the common response (iii) for a discussion of the realism of the generated images.\\n\\n> please further elaborate on the rationale for using the slope of the linear fit between ID and OOD accuracies and the significance represented by this slope. Why not use the linear correlation coefficient?\\n\\nOur analysis of the relation between ID and OOD accuracy was mainly guided by the accuracy-on-the-line discussions (Miller, 2021), who find that a linear fit often captures the dependency between a ID and OOD accuracy surprisingly well. The linear slope is motivated since it quantifies which improvement of the OOD accuracy can be achieved on average when improving the ID accuracy. It helps better understanding whether an increase in OOD accuracy can be explained by a larger ID accuracy. Fig. 17 shows that most linear fits are statistically significant. To address your comment, we additionally computed the linear correlation coefficient and plot it in Fig. 18. \\n\\n\\n> Furthermore, please provide a more detailed analysis of the results in Figure 7.\\n\\nThe results in Fig. 7 show that the slope varies for different (1) shifts and (2) scales.\\n(1) This is in line with the discussions by Miller et al. (2021): The slope varies for different datasets, i.e. the effect of a better OOD accuracy with an improved ID accuracy is not consistent across shifts. \\n(2) The smaller slope for lower scales can be attributed to the subset of classes that we consider in our benchmark: A delta accuracy increase on all ImageNet classes leads to a smaller delta accuracy increase for our selected generated subset. We illustrate two examples of such a linear fit in Fig. 19. However, when the distribution shift is more prominent, an improved ID accuracy leads to a more prominent improved performance of the OOD accuracy. 
Our strategy allows us to systematically study the effect of different distribution shifts on the relation between ID and OOD accuracy.\"}",
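The failure point distribution described in this response, a histogram that counts at which shift scale each image is misclassified for the first time, can be sketched in a few lines. The sketch below is illustrative only; the function and variable names are not taken from the authors' code.

```python
import numpy as np

def failure_point_histogram(correct, scales):
    """correct: (num_images, num_scales) boolean array, True = classified correctly.
    scales: increasing shift scales, e.g. [0.0, 0.5, 1.0, 1.5, 2.0, 2.5].
    Returns, for each scale, the number of images that fail there for the first time."""
    counts = np.zeros(len(scales), dtype=int)
    for row in correct:
        fails = np.where(~row)[0]      # indices of scales at which this image is misclassified
        if fails.size > 0:
            counts[fails[0]] += 1      # only the first failing scale is counted
    return counts

# Toy usage: 3 images evaluated at 4 scales.
correct = np.array([[True, True, False, False],   # first fails at scale index 2
                    [True, False, True, False],   # first fails at scale index 1
                    [True, True, True, True]])    # never fails
print(failure_point_histogram(correct, [0.0, 0.5, 1.0, 1.5]))  # -> [0 1 1 0]
```

Because only the first failing scale is counted, an image that is misclassified early and then recovered at a later scale still contributes exactly one failure point, which is why the cumulative count can exceed the accuracy drop, as the authors note elsewhere in this thread.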
"{\"title\": \"What exactly do you mean by accuracy drop?\", \"comment\": \"Dear Authors, \\\\\\nthank you for your detailed response and for providing the more easily accessible example images. I find them very helpful to better understand the quality of the generated distribution shifts. For some shifts, I find the quality of the images convincing (e.g. cartoon style). **However, the \\u201cdust\\u201d, \\u201crain\\u201d, \\u201csandstorm\\u201d, \\u201cfog\\u201d, and \\u201csnow\\u201d categories do in my opinion not pose an OOD scenario at all, since they do not even occlude the object, but merely change the background slightly.** This came as a surprise to me, because I don't see how these images would be harder for any reasonable model than uncorrupted images. This made me realize that the *accuracy drop* you refer to in e.g. figure 5 is the **absolute delta in classification performance**, not the relative one. In other words, the accuracy of a ResNet-50, which should sit at a performance of about 80%, drops by 0.6 *percentage points* (e.g. to an accuracy of 79.4%) on the largest scale of your dataset, is that correct? Are you seriously proposing an \\\"OOD-benchmark\\\" for which even the hardest corruptions reduce the performance of a vanilla ResNet-50 by less than 1%? Please clarify this point, as otherwise I see myself **forced to change my rating to a high-confidence clear reject, since the method simply does not work**.\\n\\n\\u201cCalibrating shifts is difficult or subjective and maybe even impossible across different shifts\\u201d \\\\\\nI agree, hence my question about this - I don\\u2019t think it is legitimate to calculate the failure point by averaging over the different shifts, since the different shifts could be on very different failure scales.\\n\\n\\u201cwe plot the confidence intervals in Fig. 13 and 14\\u201d \\\\\\nThank you! I appreciate this, and find the confidence intervals smaller than expected (I assume this is a 95% confidence interval?)\\n\\n\\u201cFig. 16 (Fig. 26 in the updated manuscript) shows failure cases of the naive strategy for achieving continuous shifts\\u201d \\\\\\nMy apologies, I had missed that. Thank you for also plotting the confidence intervals in figure 3. \\n\\n\\u201cWe rewrote that section [3.1] and are open to any additional feedback about that part.\\u201d \\\\\\nCurrently, section 3.1 still is not self-containing, as somebody unfamiliar with the work of Vendrow et al 2023 and Gal et al 2023 will have a hard time understanding what was done. It would not hurt to explain a bit more of what was done here. In line 194 there\\u2019s a \\u201cthat\\u201d which doesn\\u2019t belong there. \\n\\n\\u201ca significant part of wrong classifications (>50%) can be, indeed, attributed to the comic-book class\\u201d \\\\\\nThat\\u2019s interesting, thanks. What I was wondering about was why the FP ratio would decrease for the larger scales again (I would have expected the results to look like your table does, with more mis-classifications at the larger scales).\\n\\n\\u201cYes, our LoRA adapters can be composed, as demonstrated in an example application in the supplementary material (Fig. 29)\\u201d \\\\\\nCool, thank you!\\n\\n\\u201cAdditionally, we would like to highlight a broader aspect of our method: it is not limited to nuisance shifts specified via text.\\u201d \\\\\\nI agree that this is a compelling feature of your method.\"}",
"{\"title\": \"Response 2 to reviewer zdn7\", \"comment\": \"> Why were those 14 \\\"nuisances\\\" chosen and not others, why 14 and not, say, 50?\\n\\nWe selected those diverse shifts as an example application of our approach, mainly inspired by ImageNet-R (8 shifts) and real-world weather shifts (6 shifts). Due to the scalable nature of our generative benchmark, our framework can be used for computing the robustness with respect to other shifts that can be expressed via a text prompt or a set of images as well. We are eager to test others shifts that the reviewers might have in mind.\\n\\n> What's the robustness of a failure point to random variation?\", \"we_are_not_entirely_sure_if_we_understood_this_question_correctly_and_are_happy_to_clarify_further_if_we_did_not_address_this_point_accurately\": \"Which random variation does the reviewer refer to?\\n\\nDifferent generator seeds lead to different generated images whereas for a given seed and scale the resulting image generation is deterministic. Therefore, we assume the reviewer refers to situations where a model classifies wrongly for an earlier scale but correctly for a later scale. The failure point metric is based on the first failing scale, comparing models on individual starting images to determine when the model begins to fail. If a model correctly classified a larger scale after having failed for a smaller scale, the cumulative number of failure points would be higher than the accuracy drop.\\n\\n> Is performance (accuracy) always a monotonous function of the LoRA slider strength? Are there instances when that's not the case? If so, what does it mean if there are images beyond the failure point that are again correctly recognized?\\n\\nWhile the effect of the LoRA adapter is a monotonous function, it can effect the evaluated classifiers differently. Therefore, we observe few cases where a model fails first for lower scales and classifies correctly again for larger scales. Here, the failure point metrics differs from the average accuracy drop in a way that it is more sensitive to slight variations: We purposefully define this in a way that the first failure case is considered. The failure point means that there was no mis-classification before that scale. \\n\\n> line 43: \\\"such approaches are not scalable\\\" - why not? If one takes a large dataset and applies cheap corruptions like the ones from ImageNet-C, should that be considered less scaleable?\\n\\nImageNet-C corruptions are very simple and fundamentally different to real-world distribution shifts.\\nOur approach is different to the proposed strategy since it can achieve diverse and more realistic distribution shifts without relying on manually annotated datasets, such as (Zhao et al., 2022). Therefore, scalability not only refers to the number of classes and samples but in particular to the number and complexity of distributions shifts that can be applied.\\n\\n> What's the computational cost of generating the dataset?\\n\\nTraining the LoRA adapters took 2000 GPU hours and generating the images took around 350 GPU hours. We also refer to Sec. A.5.6.\\n\\n> Figure 7: [...] I'd recommend using different colors here to avoid confusion.\\n\\nThanks for that note. We updated the figure accordingly.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Dear reviewer 1iK7,\\n\\nthank you for your response and your active engagement in our discussion. This clearly helped us improve our paper.\\nWe are grateful that you recognize the significance of our work and you agree with the other reviewers that our work represents a valuable contribution to the community.\\n\\nThe Authors\"}",
"{\"summary\": \"This paper introduces a novel benchmark dataset for evaluating model robustness by systematically controlling individual nuisance factors. The dataset allows for a precise assessment of the failure points of vision models, based on the severity of these controlled nuisance factors. The authors find that model rankings vary with changes in shift severity, and model architecture is a key factor in robustness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Instead of measuring average accuracy drop across all nuisance shifts, the authors consider evaluating model performance at specific levels of nuisance shifts, enabling a detailed analysis of failure points in vision models.\", \"weaknesses\": \"1. Unclear contributions: The contributions listed in the paper seem overlapping. The distinctions among them are insufficiently clear. Notably, the third contribution is not visible in the main text. While the paper claims 14 distinct nuisance shifts as a key contribution, it lacks an explanation or rationale for selecting these specific shifts. Since this is a foundational aspect of the contribution, detailed descriptions should be provided in the main text, not relegated to the appendix.\\n\\n2. Ambiguity in benchmark superiority: The authors assert that their benchmark outperforms existing benchmarks for evaluating model robustness by incorporating nuisance shifts across multiple severity levels. However, earlier works by Hendrycks & Dietterich (2018) and Kar et al. (2022) already support multi-severity analysis for vision model failure points. Thus, the authors should clarify how their benchmark framework distinctly advances beyond these existing approaches.\\n\\n3. Inconsistent statements on model robustness: In line 451, the authors claim that transformers are more robust than CNNs, yet this statement seems contradicted by Fig. 6a, where ConvNext outperforms ViT and DeiT but performs slightly worse than DeiT3. This inconsistency suggests that CNNs may not always be less robust than transformers, and the statement should be re-evaluated or clarified.\\n\\n4. Validation of realistic nuisance shifts: While the authors argue that the benchmark includes realistic nuisance shifts, the realism of these diffusion-generated images is not substantiated. Proper validation, such as human assessment, would enhance the credibility of this claim.\\n\\n5. Readability of figures: The font size in several figures is too small, which detracts from readability. Increasing the font size would improve clarity for readers.\", \"questions\": \"1. Self-supervised pre-training: Why is DINOv1 using linear probing compared with other models? This seems to create an unfair comparison, as linear probing may not fully reflect the robustness of self-supervised models relative to other models in the evaluation. Could you clarify the rationale behind this comparison approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer 1iK7\", \"comment\": \"> One fundamental weakness of the paper is the lack of motivation for why a robustness evaluation at different levels is important.\\n\\nWe refer to the common response (i) where we discuss that considering multiple scales allows a more nuanced study of model robustness.\\n\\n> Why is it interesting at which severity level a model fails, especially given that it\\u2019s unclear whether the corruption severity levels across different shifts and different classes are properly calibrated against each other?\\n\\nCalibrating shifts is difficult or subjective and maybe even impossible across different shifts, i.e., sand storm and painting, which are fundamentally different. Similarly, not all levels of different corruptions types in ImageNet-C can be directly compared. Therefore, our motivation for the failure point is grounded by the goal to check whether a model fails earlier or later than other models, which does not require a calibration across shifts but still enables us to already make important observations.\\n\\n> Of course, having a method of subjecting any training set to a natural distribution shift is great, but the Dataset Interface paper already achieves this.\\n\\nWe agree that the Dataset Interface provides a valuable basis for generative benchmarking. However, we believe that the contribution of being able to realizing continuous shifts is note-worthy since it allows a more nuanced and systematic study of robustness, showing that the ordering of models can change for different scales and shifts. Additionally, we enable more fine-granular changes by applying LoRA adapters. And lastly, we carefully analyze the effect of out-of-class samples and propose a more effective mechanism that better removes such OOC samples.\\n\\n> but I wonder why that matters, unless the ordering of models drastically changes across the different levels.\\n> I wonder whether the differences in figure 6b and 6c [...] are statistically stable.\\n\\nTo address your concern whether our findings are statistically stable, we plot the confidence intervals in Fig. 13 and 14, which shows that some of the depicted model rankings actually change significantly.\\nTherefore, we underline that aggregating the performance into one single metric removes relevant insights when evaluating the OOD robustness in a specific scenario.\\n\\n> If I had a dataset with a painting-corruption, how would I know what the corruption-scale of my dataset is, to then select the best model at that level?\\n\\nWe agree that our work does not address this question. While we focus on benchmarking along our pre-defined scales, estimating the shift scale of a specific image or dataset is a fundamentally different task that goes beyond the scope of our work. However, our trained LoRA adapter might be potentially applicable for this task, which is an exciting future direction. The adapters scales could be estimated using the standard diffusion noise prediction objective, providing a measure of the average shift scale of a dataset or even a single image.\\n\\n> And do I really care about the minuscule differences between models (<< 1% accuracy delta) at scale 1, or would I simply select the model that does best at the maximum scale?\\n\\nIdeally, one would choose a model that performs best for all scales. 
However, we, again, follow the same motivation as ImageNet-C (see common response (i)): If one chooses simply the model with the best performance for the largest scale, one might end up using a model that performs worse for all other scales.\\nLet's consider a specific example for the sandstorm shift (as in Fig. 34b): A model that only focuses on the front perspective of the head of the dog, will outperform a model at the last scale even though it might perform worse for most other images of that class.\\n\\n> While I appreciate that the authors included the failure cases in figure 16, they do make me wonder how reliably the method really works [...] It would be good to also add confidence intervals to figure 3.\\n \\nFig. 16 (Fig. 26 in the updated manuscript) shows failure cases of the naive strategy for achieving continuous shifts (interpolation of text embeddings) and explains why we did not pursue this simple strategy and, instead, worked with LoRA adapters. An example failure case of our method is depicted in Fig. 2, which our filter strategy removes. As stated in the common response (iii), we conducted a human assessment that showed that 1% of our images are not samples of the class. We hope this also addresses your concerns. We plot the confidence intervals for Fig. 3 in Fig. 15.\\n\\n> I would have liked to take a closer look at the images in the benchmark, but could not unzip the provided benchmark.zip file, apparently because the file was corrupted. I don't think it's an issue on my end, could you look into this?\\n\\nThe `benchmark.zip` file works on our end, but Google may show warnings with large files. We\\u2019ve uploaded a subset of the dataset in `CNS_dataset/example_imgs` on Google Drive. We hope this helps!\"}",
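The confidence intervals referred to here (Figs. 13 to 15) are not described procedurally in this thread. One common way to obtain them for a per-scale accuracy is a percentile bootstrap over images, sketched below under that assumption; the function name and parameters are placeholders rather than the authors' implementation.

```python
import numpy as np

def bootstrap_ci(correct, n_boot=1000, alpha=0.05, seed=0):
    """correct: 1-D boolean array of per-image correctness at one shift scale.
    Returns a (lower, upper) percentile-bootstrap confidence interval for the accuracy."""
    rng = np.random.default_rng(seed)
    n = len(correct)
    accs = [rng.choice(correct, size=n, replace=True).mean() for _ in range(n_boot)]
    return float(np.quantile(accs, alpha / 2)), float(np.quantile(accs, 1 - alpha / 2))
```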
"{\"summary\": \"The paper introduces CNS-Bench, which uses generative models for benchmarking robustness across diverse continuous nuisance shifts by applying LoRA adapters to diffusion models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-structured, and the proposed CNS-Bench benchmark is simple yet effective for evaluating model robustness. The authors provide comprehensive discussions along three key dimensions\\u2014architecture, number of parameters, and pre-training paradigm\\u2014giving clear insights into the paper's findings.\", \"In addition to the proposed dataset for benchmarking model robustness, the authors present an annotated dataset to benchmark OOC) filtering strategies. They introduce a novel filtering mechanism that significantly improves filter accuracy, which is a notable contribution.\", \"The application of LoRA sliders to compute shift levels continuously is a particularly innovative and inspiring approach. This adds an interesting methodological contribution to the paper.\"], \"weaknesses\": [\"The novelty of the insights presented in this paper could be more compelling. For example, in Figure 6, are there any underlying reasons or mechanisms that could provide a deeper understanding of the results? It would be beneficial to explore these further to add depth to the conclusions.\"], \"questions\": \"There is a lack of clarity on how the dataset handles potentially unrealistic or counter-intuitive scenarios, such as cars driving on water. How are these cases addressed? A discussion on the handling of such edge cases would improve the comprehensiveness of the dataset.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Dear reviewer zdn7,\\n\\nwe appreciate your positive rating of our work. Thank you again for your very constructive feedback, which helped us sharpening our listed contributions.\\n\\nThe Authors\"}",
"{\"comment\": \"Thank you for the detailed responses. I think my main concerns regarding the contributions and technical details of this work were addressed. I will update the score.\"}",
"{\"summary\": \"This paper introduces a novel benchmark for evaluating the OOD robustness of vision models. The core idea is to build a system that can generate images from the training distribution, but with natural distribution shifts (like snow) applied *with continuous severity levels*, so that one can smoothly increase the degree of corruption. The authors achieve this by leveraging diffusion models conditioned on the training distribution in combination with LoRA adapters. The resulting benchmark does therefore not only yield scalar accuracy values, but performance curves for different models, relating the severity of the corruption to the drop in classification performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper seems technically sound and successfully combines different existing methods to achieve the stated goal of generating a benchmark of continuous distribution shifts. I appreciate the thorough analysis and sanity-checks, such as creating a large OOC-detection dataset to make sure that the proposed filtering mechanism works. The writing is mostly clear, although some questions remain (see below). As far as I can tell (although I'm not too familiar with generative models) the authors cite the relevant related work.\", \"weaknesses\": \"One fundamental weakness of the paper is the lack of motivation for why a robustness evaluation at different levels is important. I\\u2019m aware that ImageNet-C also offers different corruption levels, and I could maybe be convinced that having access to these levels is useful, but the analyses conducted here do not really achieve this: Why is it interesting at which severity level a model fails, especially given that it\\u2019s unclear whether the corruption severity levels across different shifts and different classes are properly calibrated against each other (see my question 4)? Of course, having a method of subjecting any training set to a natural distribution shift is great, but the Dataset Interface paper already achieves this. So the overall contribution of the paper is effectively \\u201conly\\u201d interpolating between uncorrupted images and fully corrupted images, but I wonder why that matters, unless the ordering of models drastically changes across the different levels. That does not seem to be the case overall, according to figure 6a, and I wonder whether the differences in figure 6b and 6c (where values are averaged over fewer trials) are statistically stable. Adding confidence intervals to these plots would help convince me that this is indeed a robust finding. But even if this were the case: If I had a dataset with a painting-corruption, how would I know what the corruption-scale of my dataset is, to then select the best model at that level? And do I really care about the minuscule differences between models (<< 1% accuracy delta) at scale 1, or would I simply select the model that does best at the maximum scale?\\nWhile I appreciate that the authors included the failure cases in figure 16, they do make me wonder how reliably the method really works, and whether this unreliability might explain the weird curves in figure 6c. 
It would be good to also add confidence intervals to figure 3, to give a better idea of the quality of the generated images (but see my question 2 about the y-axis values of figure 3).\", \"questions\": \"## Feedback\\n* I would have liked to take a closer look at the images in the benchmark, but could not unzip the provided benchmark.zip file, apparently because the file was corrupted. I don't think it's an issue on my end, could you look into this?\\n* I think the writing, especially in section 3.2 where the method is explained, could be improved quite a bit, also to render the paper more self-sustained - I found myself having to look up the referenced papers, even though the relevant parts could have been summarized in a few sentences. For example, how exactly the scale of the sliders works cannot be understood from this paper alone, one needs to read Gandikota et al. 2023.\\n* The legend of figure 7 is broken. The label for scale 1.5 appears twice and the values are not ordered.\\n* Minor point, but in figures 9 and 10 it might be better to share the y-axis for more comparability between the plots.\\n\\n## Questions\\n1. In line 197, shouldn\\u2019t $\\\\theta$ have both $c_t$ and $c_+$ in the subscript, like $\\\\theta_{c_t, c_+}$?\\n2. In figure 3, how is it possible that the difference of two cosine similarities, which should be <= 2, achieves values of up to 7.5?\\n3. In line 423, you write that an explanation for the abrupt change in failure rate of the cartoon style is the ImageNet class \\u201ccomic book\\u201d, but I don\\u2019t see why images would be mis-classified as comic books more for scale 1.5 than for scale 2 and higher. \\n4. Do you have any way of asserting that the severity levels of different shifts and different classes are actually calibrated, i.e. that scale 2.5 of an elephant in snow is the same level of corruption as a scale 2.5 zebra in fog? Since you are training different LoRAs for the different classes, I\\u2019m not sure if this will always be the case, but it might be desirable. (I guess one could calibrate this using the CLIP-distances\\u2026?)\\n5. In principle, could you combine different distribution shifts at the same time? E.g., modify the same image to both exhibit fog and snow?\\n\\n## Final Assessment\\nOverall, I\\u2019m a bit skeptical of the relevance of the contribution of the paper (see above) and could not check how the images in the benchmark look like, qualitatively. I propose to reject for now, but I'm curious to hear the perspectives of the other reviewers and would be willing to increase my score if they deem this work relevant, or if the authors can motivate the need for continuous shifts better.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Common response\", \"comment\": \"We thank all reviewers for their questions and constructive feedback. We are pleased that the reviewers recognize the \\\"particularly innovative and inspiring approach\\\" (8WvD), that \\\"understanding the robustness of models to nuisances of varying degrees is crucial\\\" (LxGf), that our \\\"technical approach is a nice, simple and creative application of generative diffusion models\\\" (zdn7), that our strategy is \\\"technically sounds\\\" (1ik7), and that the reviewers \\\"appreciate the thorough analysis and sanity-checks\\\" (1ik7).\\n\\nIn the following, we discuss common questions among the reviewers.\\n\\n> **(i)** Why is a benchmark supporting multiple scales relevant (1ik7) and why is our benchmark superior to ImageNet-C? (xh13,LxGf)\\n\\nImageNet-C, the pioneering work by Hendrycks & Dietterich (2018) has shown that it is important to consider multiple corruption scales. In particular, they showed that it is possible that a classifier A has a lower performance drop than classifier B, even though classifier A degrades more gracefully in the presence of corruptions, and hence might be preferable over classifiers that degrade suddenly. However, ImageNet-C and, similarly, 3D-CC by Kar et al. (2022) are restricted to synthetic or simple semantic corruptions and do not consider a variety of real-world distribution shifts. The motivation for considering multiple scales also holds for real-world nuisance shifts. Enabling a systematic of real-world nuisance shifts at multiple scales is the focus of our work.\\n\\n\\n> **(ii)** Claim to fame (zdn7) and unclear contributions (xh13)\\n\\nWe present our paper's claim to fame, i.e. our main contribution, as follows:\\nCNS-Bench allows testing the robustness of vision models with respect to fine-granular and continuous-scale real-world distribution shifts for the first time. This significantly extends the variety and diversity of robustness evaluations at multiple scales compared to ImageNet-C.\", \"our_main_new_finding_is\": \"There is not one single model that performs best on all distribution shifts and the model rankings can vary for different shifts and scales. Therefore, the selection of the considered shifts and their severities clearly influence the final model rankings when comparing averaged performances. Consequently, we underline the importance of applying nuisance shifts that are more specific to an OOD scenario of interest.\\n\\nFurther, we address an urging challenge for generative benchmarking by proposing an improved filtering mechanism for removing failed generated images.\\n\\nSince CNS-Bench allows applying continuous shifts, it also enables the computation of failure points for diverse distribution shifts that go beyond the analysis of simple corruptions.\\n\\nOur approach advances the field since we allow a more nuanced study of model robustness by applying controlled multiple-scale real-world distribution shifts.\", \"we_followed_the_advice_by_reviewer_xh1303_and_restructured_the_list_of_contributions_and_updated_the_manuscript_accordingly\": \"1) Benchmark for continuous shifts, 2) filtering strategy, 3) systematic evaluation.\\n\\n\\n> **(iii)** Realism of the generated images (xh13,LxGf)\\n\\nIn our work, we address the realism of the generated images in various ways:\\n\\n- First, we proposed an improved filtering mechanism to remove OOC samples. To parameterize the filtering, we collected a large manually labeled dataset. 
We illustrated that our automatically filtered dataset results in a comparable accuracy estimate to the manually labeled dataset.\n- Second, we purposefully performed the comparison with the OOD-CV dataset to compare the effect of real-world distribution shifts and our generative approach.\n- Third, we fine-tuned a classifier on our data and achieved improved performance on the real-world IN-R dataset.\n- Fourth, following the advice by reviewer xh13, we also conducted a user study to evaluate whether our filtered dataset contains images that do not represent the class and we discuss the results in Sec. A.7: The estimated ratio of out-of-class samples equals 1% with a margin of error of 0.5% for a one-sigma interval.\n\n\n> Updating the manuscript\n\nWe started updating the manuscript according to your feedback, added new results, supporting figures and tables, and we will further improve it taking into account the reviewers' reactions to our rebuttal.\nIn our answers, we provide references to the updated manuscript if not stated differently.\"}",
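The reported 1% out-of-class ratio with a 0.5% one-sigma margin of error can be sanity-checked under a simple binomial model. The user study's sample size is not stated in this thread, so the value computed below is only the sample size that would be consistent with those two numbers under that model.

```python
p_hat = 0.01    # estimated out-of-class ratio from the user study
sigma = 0.005   # reported one-sigma margin of error

# Under a binomial model, sigma = sqrt(p * (1 - p) / n); solving for n:
n = p_hat * (1 - p_hat) / sigma ** 2
print(round(n))  # ~396 labeled images would be consistent with these numbers
```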
"{\"title\": \"Response 2 to reviewer 1iK7\", \"comment\": \"> I think the writing, especially in section 3.2 where the method is explained, could be improved quite a bit.\\n\\nWe rewrote that section and are open to any additional feedback about that part.\\n\\n> The legend of figure 7 is broken. [...] in figures 9 and 10 it might be better to share the y-axis for more comparability between the plots.\\n\\nThanks for pointing out that. We addressed the point in the updated manuscript.\\n\\n> Subscripts in line 197\\n\\nWe reworked the whole paragraph and we hope this addresses your comment.\\n\\n> In figure 3, how is it possible that the difference of two cosine similarities, which should be <= 2, achieves values of up to 7.5?\\n\\nThank you for raising this inconsistency. For this plot, we multiplied the cosine similarities by 100. We updated the plot accordingly.\\n\\n> In line 423, you write that an explanation for the abrupt change in failure rate of the cartoon style is the ImageNet class \\u201ccomic book\\u201d, but I don\\u2019t see why images would be misclassified as comic books more for scale 1.5 than for scale 2 and higher.\\n\\nFirst, we quantitatively evaluated our empirical observations and we report the ratio of classes that were wrongly classified as comic book for the cartoon shift:\\n\\n| Scale | Ratio | \\n| -------- | -------- |\\n| 0 | 0.00 |\\n| 0.5 | 0.00 |\\n| 1. | 0.02 |\\n| 1.5 | 0.13 |\\n| 2. | 0.23 |\\n| 2.5 | 0.32 |\\n\\nThese evaluations show that a significant part of wrong classifications (>50%) can be, indeed, attributed to the comic-book class.\", \"this_also_shows_that_the_misclassifications_increase\": \"While the point of style change seems to be rather abrupt for a human, some visual properties continue to shift towards a more simplistic cartoon at later steps. Therefore, more and more images are misclassified.\\n\\n\\n> Do you have any way of asserting that the severity levels of different shifts and different classes are actually calibrated? Since you are training different LoRAs for the different classes, I\\u2019m not sure if this will always be the case, but it might be desirable. (I guess one could calibrate this using the CLIP-distances\\u2026?)\\n\\nWe refer to our answer to your previously raised point about the calibration to address this comment (2nd point in this comment). Calibrating shifts is an exciting challenge for future work. Applying CLIP might be good way to achieve this. However, the CLIP measure is not always reliable for our structure-preserving shifts, as we exemplarily visualize in Fig. 16. \\n\\n> In principle, could you combine different distribution shifts at the same time? E.g., modify the same image to both exhibit fog and snow?\\n\\nYes, our LoRA adapters can be composed, as demonstrated in an example application in the supplementary material (Fig. 29). Exploring the robustness to combined nuisance shifts is indeed an interesting direction for future work.\\n\\nAdditionally, we would like to highlight a broader aspect of our method: it is not limited to nuisance shifts specified via text. Our approach has the potential to handle other types of nuisance shifts, such as a continuous distribution shift from ImageNet to ImageNetv2. However, for this work, we choose to focus specifically on more confined distribution shifts to maintain a clear scope.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Dear reviewer xh13,\\n\\nthank you for recognizing the significance of our contributions and updating your score. Thank you again for your critical and constructive feedback, which helped us more clearly state the relevance of our benchmark.\\n\\nThe Authors\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Dear reviewer LxGf,\\n\\nthank you for recognizing the significance of our contributions and updating your score. Thank you again for your constructive feedback, which helped improve the clarity of our paper.\\n\\nThe Authors\"}",
"{\"title\": \"Thanks for response\", \"comment\": \"I'd like to thank the authors for taking the time to respond. I'm glad to see that the description of the contributions has been sharpened.\\n\\nI'd be willing to increase my score from 5 -> 6 if the authors would be open to adding the following aspects as limitations in the main paper:\\n\\n1. Some nuisance shifts introduce confounds, e.g. \\\"in dust\\\" not only adds dust but also removes half of the people in the image and changes a person's shirt color from red to black. As a consequence, failures cannot always be attributed to the nuisance concept itself. I understand the author's point that this may also not be the case for other datasets but I believe this is an important limitation nonetheless, in particular when it comes to attributing failures to changes in the data.\\n2. Acknowledging that the use of a generative model can lead to a real vs. synthetic distribution shift.\\n\\nTo be clear, I don't think those limitations invalidate the approach - but I think given that they influence the interpretation of results it would be best to acknowledge them in the paper. Of course, whether to incorporate this suggestion is entirely up to the authors.\"}",
"{\"title\": \"Response to reviewer zdn7\", \"comment\": \"> Nuisance shifts affect information that's not related to the nuisance concept. [...] As a consequence, failures cannot be attributed to the nuisance concept itself.\\n\\nOur strategy allows applying shifts that were not possible previously. However, using a generative model for realizing shifts inherently comes along with confounders. This is explained by the biases in the training data of the generative model. So, similarly, real-world OOD datasets also exhibit such confounders and it is equally hard to differentiate whether an accuracy drop can be, e.g., explained by \\n- a style change or a change of shape as in the ImageNet-R dataset (consider Fig. 24 for examples) or \\n- the heavy snow or the lower visual quality in the OOD-CV dataset (as already depicted in Fig. 31).\\n\\nIn contrast to previous works, we particularly reduce the confounders of the classified object by applying LoRA adapters only to later noise steps throughout the diffusion process, which prevents significant changes of spatial structure of the image. Therefore, while future benchmarks similar to our proposed one could benefit from better controllability and removed confounders to acchieve \\\"pure\\\" shifts thanks to the continued progress in generative models, we argue that our approach still achieves valid and confined distribution shifts that capture the real world biases.\\n\\n> The approach is based on generative models, thereby introducing a real vs. synthetic distribution shift that may further influence results. A discussion - better yet: an analysis - of this likely confound is recommended.\\n\\nWe agree that this needs to be considered carefully and fundamentally challenges any generative benchmark. To reduce the bias of the accuracy estimate between the distribution of Stable Diffusion and ImageNet, we used the ImageNet-trained text embeddings to better replicate the ImageNet distribution, which reduces the bias of the accuracy estimate by around 7%. We underline that generative benchmarks do need to address the biases arising from the real vs. synthetic shift. We use a filtering mechanism that is parameterized on a human labeled-dataset to reduce the biases of the accuracy estimates, we performed a user study to check the realism of the benchmarking images, and we compared the distribution shifts of a real-world dataset (OOD-CV) to our work. We discuss our strategies to address the realism of the generated images in the common response (iii).\\n\\nDoes this address your comment or do you propose a different way to analyze confounder?\\n\\n> The paper's main claim to fame remains a bit unclear to me. [...] I recommend that the authors clearly state and justify what they believe is the primary novel contribution (\\\"claim to fame\\\")\\n\\nWe really appreciate this very valuable comment and for re-framing the variety of options. We present our paper's claim to fame, i.e. our main contribution, as enabling the testing of the robustness of vision models with respect to realistic continuous-scale distribution shifts. We refer to the common response (ii) for a more elaborate discussion of the claim of fame.\", \"we_additionally_address_your_other_proposals\": \"While we do not focus on improving the controllability of the generative model, we aim at selecting the best methods for achieving controllability in diffusion models. 
In general, we would like to emphasize that generative benchmarking can be a complementary way of benchmarking models - not reducing the importance of real-world datasets. The advantage of generative benchmarks lies in the flexibility and scalability of possible shifts, classes, and samples. Additionally, sampling from a generative model better captures the statistics of the real world than a small dataset, since the model is trained on very large-scale datasets. Reducing the biases of generative benchmarks through the removal of failure cases of generative models is necessary to advance generative benchmarking. Therefore, we propose an improved filtering mechanism in our work and show that failure cases are effectively filtered out.\n\n> If it should be a benchmark, people will want to know: who won? What's the score that I need to beat in order to be SOTA? \n\nThank you for raising this point. We added Tab. 1 in the main paper and Tab. 2 in the supplementary, where we present the mean relative corruption error as introduced for ImageNet-C. This metric allows ranking various models using one single quantity. However, underlining our key finding, we argue that benchmarking robustness should not only involve averaged quantities.\"}",
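The mean relative corruption error mentioned here follows the aggregation introduced for ImageNet-C (Hendrycks & Dietterich). A minimal sketch of that aggregation is given below; the dictionary layout and the choice of baseline model are assumptions, since neither is specified in this exchange.

```python
import numpy as np

def relative_mce(err_model, clean_model, err_base, clean_base):
    """err_*: dict mapping shift name -> array of error rates over scales.
    clean_*: scalar clean error rate of the same model.
    Per shift, the model's degradation is normalised by the baseline model's
    degradation; the result is averaged over shifts, as in ImageNet-C's relative mCE."""
    ratios = []
    for shift in err_model:
        num = np.sum(np.asarray(err_model[shift]) - clean_model)
        den = np.sum(np.asarray(err_base[shift]) - clean_base)
        ratios.append(num / den)
    return float(np.mean(ratios))
```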
"{\"title\": \"Thank you for your review\", \"comment\": \"Dear reviewer 8WvD,\\n\\nwe hope, our rebuttal addressed your concerns. Thank you again for your positive review and for recognizing the significance of our work. \\n\\nThe Authors\"}"
]
} |
1S8ndwxMts | Towards Robust Evaluation of Protein Generative Models: A Systematic Analysis of Metrics | [
"Pavel Strashnov",
"Andrey Shevtsov",
"Viacheslav Meshchaninov",
"Maria Ivanova",
"Fedor Nikolaev",
"Olga Kardymon",
"Dmitry Vetrov"
] | The rapid advancement of protein generative models necessitates robust and principled methods for their evaluation and comparison. As new models of increasing complexity continue to emerge, it is crucial to ensure that the metrics used for assessment are well-understood and reliable. In this work, we conduct a systematic investigation of commonly used metrics for evaluating sequence protein generative models, focusing on quality, diversity, and distributional similarity. We examine the behavior of these metrics under various conditions, including synthetic perturbations and real-world generative models. Our analysis explores different design choices, parameters, and underlying representation models, revealing how these factors influence metric performance. We identify several challenges in applying these metrics, such as sample size dependencies, sensitivity to data distribution shifts, and computational efficiency trade-offs. By testing metrics on both synthetic datasets with controlled properties and outputs from state-of-the-art protein generators, we provide insights into each metric's strengths, limitations, and practical applicability. Based on our findings, we offer a set of practical recommendations for researchers to consider when evaluating protein generative models, aiming to contribute to the development of more robust and meaningful evaluation practices in the field of protein design. | [
"evaluation metrics",
"protein",
"protein generative models"
] | Reject | https://openreview.net/pdf?id=1S8ndwxMts | https://openreview.net/forum?id=1S8ndwxMts | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"g1BtF5vbzb",
"GeBDZASMib",
"FrFPb0orWM",
"Ev6L2LhRGk",
"ErCc3sp88R",
"3w4Bb5WoVG"
],
"note_type": [
"official_review",
"official_review",
"meta_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1729101817933,
1730217038708,
1735294434119,
1730126323582,
1737524292755,
1730060517415
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13975/Reviewer_Rn7f"
],
[
"ICLR.cc/2025/Conference/Submission13975/Reviewer_YRsS"
],
[
"ICLR.cc/2025/Conference/Submission13975/Area_Chair_2sKs"
],
[
"ICLR.cc/2025/Conference/Submission13975/Reviewer_xLmN"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13975/Reviewer_Hg9Y"
]
],
"structured_content_str": [
"{\"summary\": \"This work evaluates various quality, diversity and distributional similarity metrics for their ability to co-vary with random synthetic perturbations on protein amino acid sequences. The authors also evaluate differences in plddt scores for forward folded noised sequences.\\nThe stated aim is to provide a systematic overview of how those metrics change with sequence randomness (noise), number of protein samples and model size to advance the evaluation of protein generative models. However, I think this works falls short of this aim. The proposed metrics are omitting the non-bijective nature of the protein structure-sequence relationship, the authors do not compare with well-established quality metrics in the field of generative protein design (e.g. self-consistency folding as a quality metric for structural fidelity, or edit-distance as a function of distributional similarity). The authors only present results on synthetically perturbed sequences, where residues are mutated with equal probability to assess perplexity and diversity. This is very different from the case of generative modeling, where diverse sequences are generated (non-randomly!) via auto-regressive sampling, any-order sampling or temperature sampling. I would recommend to generate sequences with these models, and synthetically perturbed sequences with BLOSUM or PAN transition matrices.\\nI don't think that machine-learning motivated metrics, such as perplexity, or earth mover's distance are practically useful for the field of generative protein design. Useful metrics should capture if the model generates protein sequences or structures, that fold, are stable, exhibit a specific function.\\nI have several concerns about the methodology, biological soundness and presentation of this work as I will outline concretely below.\", \"soundness\": \"1\", \"presentation\": \"1. Please review citation guidelines, current citation style is reader unfriendly.\\n2. Please mark supplementary figures as such (e.g. Figure 9). \\n3. Figure 8 is missing\", \"contribution\": \"1\", \"strengths\": \"Originality: A systematic analysis of metric variability with protein sequence diversity is a good idea and I would recommend the authors to build on it, but incorporate several improvements: Instead of uniform probabilities for mutations it might be more meaningful to use PAM or BLOSUM matrices. These are the transition probabilities from one amino acid residue to another one (based on similar hydrophobicity, charge, polarity, size etc.).\", \"significance\": \"The authors correctly emphasize that there is no gold-standard in the field of generative protein models on what constitutes a \\\"good\\\" protein. The topic is worth being addressed, although I don't think that this work provides a significant contribution.\", \"weaknesses\": \"Background section:\\n1. The motivation of diversity in the absence of training data is confusing. The authors should discuss that the structure-sequence relationship is no bijective and a one-to-many mapping problem. There are very many sequences that fold into the same or similar structures. The true diversity of this solution space is not known, given the small size of structural data in the PDB. This diversity is likely a complex function of protein size (there are very many diverse sequences that all fold into the same alpha helix peptide), packing (internal residues less diverse, versus external residues etc. \\n2. 
The authors mention structural stability as a measure of a \\\"good\\\" protein, but do not evaluate this property in this work, this is confusing.\\n3. I find the mathematical notations (especially under \\\"self-consistency\\\" overly complicated (given they are not being used) anywhere else. \\nSection 3, Metrics:\\n1. Fidelity metrics: The fidelity metrics are not addressing structural fidelity in terms of structural similarity (e.g. TM-score or RMSE) in the case of forward folding. Or self-consistency TM in the case of inverse folding. \\n2. In general I would recommend the authors to split metrics for different generative model types and approaches, e.g. sequence-based (e.g. LLMs), inverse folding (structure-to-sequence)\\n3. In section 2.3. the authors state that metrics should be interpretable. I don't find perplexity, or pseudo-perplexity very interpretable. plddt is interpretable. I would recommend adopting metrics like edit distance or structure consistency (e.g. TMscore). I think reporting perplexity in a protein LLM is still valuable, but it's not particularly novel or insightful. I am not sure if self-consistency perplexity: -logp(S|G(F(S)) makes sense given that this protein inverse folding (G) is a one-to-many problem with an unknown and variable number of diverse solutions. And as the authors state -- the folding and inverse folding model bias might further complicate this metric.\\n4. Section 3.2: The diversity defintion of cluster density at 50% and 95% is interesting, but shoudl be compared to more commonly adopted diversity metrics in the field, such as edit distance and pairwise distances.\", \"section_4\": \"Experiments:\\n1. I like the idea of a systematic perturbation of amio acid sequences, but random noise (uniform transition probabilities) is unrealistic. I would recommend using BLOSOM or PAN matrices. Additionally to the synthetic perturbations I am missing an actual application to generative models. I would recommend using different inverse folding models, e.g. ESMIF or ProteinMPNN and generating diverse sequences with random decoding orders and temperature sampling. Currently the authors perturb the sequence in a random way which likely turns them easily into garbage (ie they would never exist in nature and fold). \\n2. The random perturbations do not create meaningful biological diversity in the sequences and simply degrade their quality. As such Figures 2 and 4 are stating obvious trends: The more noise, the worse the quality/fidelity metrics.\", \"questions\": \"Similar to above weaknesses:\\n1. How do those metrics behave for meaningful diverse sequences, that were note generated with random noising?\\n2. Are the randomly noised sequences foldable? Have you tried to calculate the TM-score between the original sequence and the forward folded structure of a 30% noised sequence?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
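The perturbation scheme this review asks for, substitutions weighted by a BLOSUM matrix rather than uniform noise, could be prototyped roughly as follows. The sketch assumes Biopython is available, and turning log-odds scores into sampling weights by exponentiation is a simplification rather than the statistically exact use of the matrix's underlying target frequencies.

```python
import numpy as np
from Bio.Align import substitution_matrices  # Biopython

AAS = list("ACDEFGHIKLMNPQRSTVWY")
BLOSUM62 = substitution_matrices.load("BLOSUM62")

def blosum_mutate(seq, rate=0.1, seed=0):
    """Mutate roughly `rate` of the positions, sampling replacement residues in
    proportion to exp(BLOSUM62 score) so that biochemically similar residues are
    preferred over arbitrary ones (unlike uniform random substitution)."""
    rng = np.random.default_rng(seed)
    out = list(seq)
    for i, aa in enumerate(out):
        if aa in AAS and rng.random() < rate:
            candidates = [b for b in AAS if b != aa]
            weights = np.array([np.exp(BLOSUM62[aa, b]) for b in candidates])
            out[i] = rng.choice(candidates, p=weights / weights.sum())
    return "".join(out)

print(blosum_mutate("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", rate=0.2))
```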
"{\"summary\": \"This paper studies several common evaluation metrics for protein sequence generative models covering quality, diversity and distributional similarity of samples.\\nThe authors present controlled experiments and derive guidelines for a robust assessment of the performance of protein generative models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"### Importance\\n\\nUnifying benchmarking attempts for protein generative models is an extremely important open challenge. \\nStudying various common evaluation metrics systematically and in a controlled setup is impactful because it can inform future developments of new methods and allow researchers to benchmark their models in a more convincing way.\\nThe problem is motivated nicely and grounded in related works.\\n\\n### Breadth\", \"the_paper_addresses_three_dimensions_of_generative_model_evaluation\": \"**quality**, **diversity**, and **distributional similarity**.\", \"it_furthermore_identifies_at_least_two_axes_along_which_evaluation_metrics_should_be_assessed\": \"**robustness vs sensitivity** and **reliability vs computational efficiency**.\\nTogether these cover most practically relevant aspects of model evaluation in this space.\", \"weaknesses\": \"### Clarity\\n\\nThe presented topic is very complex and the authors' attempt to illuminate the design space for these metrics from various angles is commendable.\\nHowever, the clarity of the presentation of their results can be improved. \\nThe paper introduces a lot of metrics and desirable properties thereof but the arguments are sometimes difficult to follow in the current state.\\nIt could be useful to restructure the experimental results section so that each subsection (quality, diversity and distribution similarity) systematically analyses different available metrics regarding their (1) robustness-sensitivity trade-off, and (2) reliability-efficiency trade-off.\\nI would define a clear, quantitative criterion for each and follow an identical structure in each subsection (quality, diversity and distribution similarity).\\nThe current discussion sometimes mixes empirically supported findings with intuition-derived arguments.\\n\\nIn the background section, it is confusing that most of the time the paper discusses three key axes of model performance: quality, diversity and distribution similarity,\\nbut in Section 2.2 it talks about an alternative set of objectives: fidelity, diversity, novelty. \\nSimilarly, the paper introduces \\\"Interpretability\\\" in Section 2.3 but does not discuss this aspect in the Results section.\\nI would recommend to be more consistent throughout the paper (both in terms of wording and semantics).\\n\\nFurthermore, the paper should define the scope of the work clearly. It only covers generative models for amino acids _sequences_ as opposed to backbone _structures_.\\nThe discussion about self-consistency in Section 2.1 seems unnecessarily detailed given the concept is only used once later on (scPerplexity metric). \\nWhen I arrived at this point in the manuscript I was under the impression that the paper discusses both sequence and structure generative models because self-consistency is primarily used in the evaluation of _structure_ design methods (e.g. 
[1]).\\n\\n\\n\\n### Analysis of diversity metrics\\n\\nThe analysis of diversity metrics (Section 4.3) is extremely short, and it is unclear whether the presented data in Figure 3 provides information about the _sensitivity_ or _robustness_ of the Cluster Density metric.\\nThe absence of a comparison with alternative approaches additionally makes it hard to interpret the results.\\n\\n\\n### Support every claim with empirical data\\n\\nA systematic evaluation of metrics should always provide empirical evidence to back up the presented conclusions. \\nHere, this is missing in some cases. For instance,\\n- Looking at Figure 9 I would argue there are still notable differences between AlphaFold2 and ESMFold. Rather than just assessing their correlation, it would be useful to understand how sensitive and robust each method is to sample quality differences.\\n- The paper states that simple diversity metrics lack discriminative power but it does not discuss any examples in the analysis in Section 4.3.\\n- The paper also mentions intrinsically disordered regions as a potential stumbling block for the pLDDT metric. While this assumption is reasonable, it is still possible that pLDDT has better discriminative power than alternative metrics in those cases, but only empirical data can provide an answer to this question.\\n- Finally, statements about computational efficiency are never quantified. Providing concrete run times would be an important piece of information that allows readers to get an idea about the reliability-efficiency trade-off.\\n\\n\\n\\n### References\\n\\n[1] Yim, Jason, et al. \\\"SE (3) diffusion model with application to protein backbone generation.\\\" arXiv preprint arXiv:2302.02277 (2023).\", \"questions\": [\"Would it be possible to discuss the sensitivity-robustness trade-off more systematically & quantitatively? For instance, does it make sense to interpret the cluster elimination experiment as a strong perturbation (that a sensitive distribution similarity metric should detect) and intra-cluster diversity reduction as a weak perturbation (that distribution similarity metrics should be more or less robust to)?\", \"Why is the CD diversity metric not compared to simpler alternatives like average pairwise distances between generated sequences?\", \"I would like to see some reference data for run times of different metrics to support statements about the reliability-efficiency trade-off.\", \"Section 4.4.3 should also discuss the _Density_ and _IPR Precision_ results.\", \"The conclusion states \\\"We demonstrate that combining quality, diversity, and distributional similarity metrics provides the most robust assessment of generated proteins\\\". As far as I can tell all experiments evaluate metrics in isolation and therefore do not really support this statement. Could you please elaborate a bit more?\", \"Figure 8 is missing.\", \"### Minor comments\", \"line 37: missing/broken reference\", \"line 43: reference seems to be incorrectly formatted\", \"quotation marks should be corrected in some places (e.g. lines 73 and 81)\", \"the norm in the equation in line 89 is not specified, maybe a more general notation for a distance function should be used here\", \"line 103: I am not sure if I agree with the definition of _diversity_ using memorization of the training data. Samples from the training set can still be diverse. 
Doesn't this definition apply to _novelty_?\", \"In many places, it would be preferable to change the formatting of citations (use `\\\\citep` instead of `\\\\citet`).\", \"line 157: why did the notation change? Before, small letters were used to denote the folding function and inverse folding function, respectively.\", \"line 214: indices are not correctly formatted\", \"Figures 2 - 5: error bars should be defined in the figure legends\", \"Figure 4 should be referenced in the main text.\", \"Section 4.2.1: how many data points were used to calculate the correlation values? Is the raw data shown somewhere?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
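The simpler diversity baseline this review asks to compare against, the average pairwise distance between generated sequences, is straightforward to compute. The self-contained sketch below uses a plain edit distance; normalising by the longer sequence length is one common but arbitrary choice.

```python
from itertools import combinations

def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def mean_pairwise_distance(seqs):
    """Average normalised edit distance over all pairs: a simple diversity baseline."""
    pairs = list(combinations(seqs, 2))
    return sum(edit_distance(a, b) / max(len(a), len(b)) for a, b in pairs) / len(pairs)

print(mean_pairwise_distance(["MKTAYIAK", "MKTAYLAK", "MATTYIAK"]))  # -> 0.25
```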
"{\"metareview\": \"This paper examines evaluation metrics for protein generative models, looking at quality, diversity, and distributional similarity criteria. Proteins are essential to life, and AI has shown great promise in biology with the advent of AlphaFold and other state-of-the-art protein models (both at the sequence and structure levels). Several metrics are used in the literature to evaluate these protein models. Analyzing these metrics is therefore important. While the reviewers agree on the importance of this topic, they judged the paper, in its present state, misses several key elements. Indeed, it failed to include some metrics in the evaluation set, it looked only at synthetic datasets and used perturbation schemes for protein sequences such as random noise that might be unrealistic. Reviewers were also unhappy with the lack of clarity of the presentation.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not provide any rebuttal, comment, or response to the reviews. The reviewers raised valid points that were not addressed.\"}",
"{\"summary\": \"The paper analyses several metrics for protein generation on synthetic datasets with controlled properties in order to see their strengths, limitations, and practical applicability. The paper highlights that some metrics are dependant on sample size and that computationally efficient metrics can be just as effective.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper's background section and motivation is extremely strong. The need for reliable evaluation metrics for protein generation is convincing and some of the metrics used in the literature are clearly outlined.\", \"The controlled experiments are well thought out and provide some useful information about the quality of the metrics.\", \"The authors perform a rigorous set of experiments and the provided practical recommendations could be useful to the community.\"], \"weaknesses\": [\"The main weakness comes from evaluating the metrics in such a controlled and synthetic setting. The quality metrics are evaluated on proteins which the models (such as ESMFold) are trained on. In this case, introducing more noise is shown to cause the metrics to get worse. In practice though, we generate unseen proteins and it is not clear whether these metrics generalize to proteins they are not trained on. Additionally, it is not clear from the paper whether these metrics correlate with anything experimentally. Therefore, the evaluation of these metrics in the given scenario doesn\\u2019t seem to offer much practical insight on the usefulness of these metrics.\", \"The authors compare different metrics and explain that there should be a tradeoff with computational efficiency. However, it is not clear how the methods actually differ in this regard. You mention a few times that scPerplexity is expensive to calculate as it involves two models but there is no figure or timing comparison given. How much slower is it and is it impractical? You also say that your proposed metrics allow for rapid evaluation. Again, how long are these proposed metrics taking and what does the term \\u201crapid\\u201d quantitatively mean? Although computational efficiency is mentioned a lot throughout the work, and seems to be important for selecting metrics, I have no indication from the paper on how these methods actually differ in this regard and why I should use a method over another practically. To improve this, the authors could include a table or figure comparing the runtime of each metric on a standardized dataset, perhaps across different sample sizes.\"], \"questions\": [\"In line 307 you say that ScPerplexity has the highest sensitivity to sample size. Firstly, this isn\\u2019t clear from the plots as it doesn\\u2019t seem to change any more than plDDT. Additionally, why does sample size matter if the ordering with respect to noise is always correct? In practice, we can fix the sample size and correctly rank different generative models.\", \"You say one of the important aspects of a metric is its interpretability but this isn\\u2019t considered later when evaluating the metrics. Are these metrics interpretable and are there differences in interpretability between them?\", \"You say that a good generated protein should be structurally stable. Are any of the metrics actually capturing this?\", \"minor comments\", \"Line 37 missing reference\", \"Quotation marks are always the wrong way up when used. For example, line 46.\", \"Some references are missing their journal. 
For example, \\u201cGenerating novel, designable, and diverse protein structures by equivariantly diffusing oriented residue clouds\\u201d was at ICML 2023."], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The present work attempts to provide insight into various metrics of generative models over protein sequences. They evaluate several metrics used in prior works such as predicted local distance difference test, perplexity, pseudo perplexity, self-consistency perplexity, cluster density, and multiple techniques for distributional similarity metrics. On a curated dataset, they measure robustness to random perturbations, sensitivity to sample size, and use of different protein language models to compute the metrics. Some recommendations are provided at the end of which models to use and sample size for robust evaluation.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Evaluation of robustness to several protein sequence generation metrics.\"], \"weaknesses\": [\"The present work contains no technical novelty or new results. Therefore the analysis and presentation needs to be of high quality. Unfortunately the presentation quality is low and the insights are novel enough for acceptance to ICLR.\", \"First, the work claims to evaluate protein generative models but proceeds to ignore or miss protein structure generative models such as RFdiffusion [1], Chroma [2]. The work only attempts to evaluate protein sequences without consideration for generated structures. Considering the popularity and success of [1, 2], this is a major omission.\", \"There are **no benchmarks** of generative models in this work. The experiments are conducted on artificial perturbations of known sequences and on a curated set of sequences from 5 protein families. The insights in this work cannot be believed and are of little use unless the metrics are rigorously evaluated on state-of-the-art protein generative models.\", \"Metrics are only useful if they correspond to success in downstream applications. The metrics used in [1, 2] are accepted because they are known to correlate (albeit weakly) with experimental success [3]. None of the metrics utilized in this work are associated with success in downstream applications. Indeed we care about how well the samples capture distributions but they are auxiliary metrics and are not the primary metrics in high impact protein generative model publications.\", \"The noise perturbations are artificial. How do we know if randomly mutating 5-30% of the sequence is a failure mode or common occurrence in existing protein generative models?\", \"Novelty is mentioned as a important consideration but no novelty metrics are presented or discussed.\", \"Only using 5 protein families is far too small of an evaluation set. Line 234 states the experiments are done on \\\"real-world generated data\\\" but what is actually being generated here?\", \"Section 4.3 on diversity metric analysis is weak. The trend in Figure 3 is the expected behavior of the 50% and 95% sequence similarity threshold. There is no new insight here.\", \"I'm not sure what new insight is provided from the noise. Figures 2 and 3 show more noise leads to all the metrics becoming worse. This is expected but there is no indication of how this transfers to commonly used protein generative models. Do protein generative models exhibit such behavior?\", \"Section 4.4 is also weak on insights. The graphs are expected by changing the noise and RBG kernel width. It would seem to me that different downstream applications would call for different parameters and robustness. 
Instead, the claims here are too general and unclear how useful they are for specific downstream applications such as binder design.\", \"I would have liked to see a ranking of protein sequence generative models such as ESM2, ProGen, T5 with the metrics provided.\", \"Overall I do not believe this work provides a careful and rigorous study of evaluating protein generative models. I recommend the authors to rethink the experiments and hypotheses they wish to test.\", \"[1] https://www.nature.com/articles/s41586-023-06415-8\", \"[2] https://www.nature.com/articles/s41586-023-06728-8\", \"[3] https://www.nature.com/articles/s41467-023-38328-5\"], \"questions\": [\"Many of my questions are embedded in the weaknesses. Some more minor questions.\", \"Line 37. What is the missing reference \\\"?\\\"\", \"Line 88. Self-consistency is mentioned but this equation is never used. Why is this given and where is it actually used?\", \"Line 158. The equation $-\\\\log p(S|G(F(S))$ is confusing. If $G(F(S))$ is the inverse folding prediction then what does it mean to conditioned $p(S| \\\\cdot)$ on this?\", \"Line 230. What protein generation tasks are considered?\", \"Line 433. How are \\\"state-of-the-art protein generative models\\\" re-trained?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
1S7kpbfgq9 | Normalized Space Alignment: A Versatile Metric for Representation Analysis | [
"Danish Ebadulla",
"Aditya Gulati",
"Ambuj Singh"
] | We introduce a manifold analysis technique for neural network representations. Normalized Space Alignment (NSA) compares pairwise distances between two point clouds derived from the same source and having the same size, while potentially possessing differing dimensionalities. NSA can act as both an analytical tool and a differentiable loss function, providing a robust means of comparing and aligning representations across different layers and models. It satisfies the criteria necessary for both a similarity metric and a neural network loss function. We showcase NSA's versatility by illustrating its utility as a representation space analysis metric, a structure-preserving loss function, and a robustness analysis tool. NSA is not only computationally efficient but it can also approximate the global structural discrepancy during mini-batching, facilitating its use in a wide variety of neural network training paradigms. | [
"Deep Learning",
"Representation Learning",
"Local Intrinsic Dimensionality",
"Similarity Metric",
"Dimensionality Reduction",
"Interpretability"
] | Reject | https://openreview.net/pdf?id=1S7kpbfgq9 | https://openreview.net/forum?id=1S7kpbfgq9 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yZ89D6NrCB",
"wiTMah2BdY",
"vijkQqtPw9",
"ukqEc5c8AS",
"uJSTWbx4Yn",
"pYeqtp3gGp",
"nzmKkI6CnP",
"ljGlDwOEfn",
"lIS68faSRz",
"lADvDaKCQb",
"jxop2UPJsH",
"ioS1QMwPbs",
"g3Z7J4K3QV",
"fryeByeDwY",
"e8FThDWDfT",
"ci0XfrMIh4",
"cDdOlFcspH",
"T6tpu0Sylw",
"Rktjz0gt3Y",
"Qsq8PEwu18",
"N5X6TyXhRk",
"McSEQJqfpj",
"MNeLUQpBtC",
"LjgxRz5EbQ",
"LeHXSnjgYP",
"IoOhmPDW5G",
"Il2yf6aycA",
"GtOPJuG1c6",
"GL1abvtjtU",
"GG1HqVY73f",
"BKcSqXF9pZ",
"7D3xkEHWHv",
"6kXZpTKFZ7",
"0u4Y3h0440"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732537438895,
1732604332504,
1732669175406,
1729844437402,
1732619577068,
1732524274020,
1732184357370,
1732183210828,
1732669625748,
1730756175503,
1732184215411,
1732183605819,
1732182757768,
1732184134716,
1732182159294,
1730763363439,
1732669089038,
1734523338422,
1732620404291,
1732604453339,
1732183330208,
1737524222450,
1732604505732,
1732184004288,
1732183537519,
1732183760500,
1732537323134,
1732182475807,
1730745722264,
1732652726615,
1732183844240,
1732182635136,
1732524037636,
1732601943805
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_rMH7"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_1hcq"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_rMH7"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_1hcq"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_rENM"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Area_Chair_5A2V"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_1hcq"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_YeRH"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_1hcq"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12902/Reviewer_rMH7"
],
[
"ICLR.cc/2025/Conference/Submission12902/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"**3. Readability of figures**\\n\\nThank you for your feedback regarding the font sizes in the figures. We acknowledge this concern and are actively working on improving the figures for better readability. Once we are aware of all the necessary changes for the revised version, we will update the main body of the manuscript to reflect these improvements.\\n\\nAs a temporary measure, we have included larger versions of all the figures from the main text in Appendix X of the latest revision to the manuscript. We hope this helps address your concerns in the interim.\\n\\n**4. Additional metrics for Section 4.1**\\n\\nThank you for your suggestion. While we understand the motivation behind applying sensitivity and specificity tests to Generative model evaluation metrics, we believe these metrics are not well-suited for such analyses. This is because these methods produce multiple evaluation metrics (e.g., precision, recall, authenticity, consistency, quality etc) rather than a single global measure of structural similarity, which makes them less directly comparable to NSA.\\n\\nHowever, we are currently working on adapting the sensitivity and specificity tests from Section 4.1 to Delaunay Component Analysis (DCA)[1]. DCA is the most recent of these metrics and is designed to address some of the limitations in earlier methods like IPR[2] and Geometry Score[3]. As highlighted in Section 2 of the DCA paper, both IPR and Geometry Score suffer from inaccuracies and limitations, which informed our decision to prioritize DCA for this analysis.\\n\\nWe are unsure if these experiments will be completed within the rebuttal period but will update the manuscript with the results as soon as possible. \\n\\nWe hope our response clarifies the queries of the reviewer. Please feel free to ask any additional questions you might have.\\n\\n[1] Poklukar et al, Delaunay Component Analysis, ICLR 2022\\n\\n[2] Kynk\\u00e4\\u00e4nniemi et al, Improved precision and recall metric for assessing generative models, in Neurips 2019\\n\\n[3] Khrulkov et al, Geometry score: A method for comparing generative adversarial networks, ICML 2018\"}",
"{\"title\": \"Additional experiments to clarify the formulation of LNSA\", \"comment\": \"Dear Reviewer,\\n\\nIn the latest revision of the manuscript we also provide empirical evidence in Appendix Y to support our design choice of not taking another inverse as recommended by Mackay and Ghahramani [1]. We show that taking another inverse not only leads to absurdly large LNSA values but also results in erratic behavior. We hope this satisfies the reviewer\\u2019s concerns regarding the choice to not invert the LID values again before computing LNSA.\\n\\nAdditionally as the revision period will be ending soon, we would be extremely grateful if you could take the time to review our rebuttal and let us know if it has resolved all of your concerns. If you have any further questions, we would be happy to answer them.\\n\\nSincerely,\\n\\nThe Authors\\n\\n\\n[1] David J.C. MacKay and Zoubin Ghahramani. Comments on \\u2019maximum likelihood estimation of intrinsic dimension\\u2019 by e. levina and p. bickel (2004), 2005.\"}",
"{\"title\": \"Clarifications on manuscript structure\", \"comment\": \"The main text of our manuscript is structured following the standard framework for introducing and validating structural similarity metrics. In Section 3, we introduce NSA and provide its formal definition. Section 4. proves its validity through specificity and sensitivity tests, while Sections 4.2 to 4.4 demonstrate its application across multiple use cases.\\n\\nThe appendices present theoretical guarantees, detailed ablation studies, additional experimental results, and practical implementation details. If the reviewer believes that specific content from the appendices should be included in the main manuscript or if additional results are necessary to address their concerns, we welcome such suggestions and are happy to restructure accordingly.\"}",
"{\"summary\": \"The authors introduce the Normalized Space Alignment (NSA) method for comparing two point clouds, which is based on comparing pairwise distances. The NSA consists of the local NSA, defined through the Local Intrinsic Dimensionality, and the global NSA, defined through Representational Similarity Matrices. The final NSA is defined as the weighted sum of global and local NSA. The experimental section includes experiments where NSA is used to analyze representations, as a loss in AE and for detection of adversarial attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Good background section on LID.\", \"Good applicability of the method.\"], \"weaknesses\": [\"General concerns:\", \"Despite a wide range of applications presented in the experimental section, the paper lacks comparison to relevant existing methods to really showcase the efficiency. For example, in the link prediction and adversarial attacks experiments, the method should be compared to the relevant baselines from the respective fields to be able to fairly judge the efficiency of the method.\", \"Datasets used in the experiments are small and basic, and the generalization of the method is questionable. How does the method behave for large sets and more complicated cases?\", \"No ablation studies are provided. For example, the method relies on the k nearest neighbors selection and I believe that the choice of k does influence the results. No experiments are provided on the robustness of k, neither is mentioned what k is actually used in the experiments. There is also no info on the balancing parameters l and g, and no ablation studies on the influence of these.\", \"The definition of GNSA depends on the choice of the origin. For example, given two point clouds X and Y, the translated point clouds will have the same structure but not the same GSNA score which is problematic. Of course one could resolve this with selecting a different origin but that is not feasible in practice.\", \"Figures are not well readable.\"], \"questions\": [\"Specific comments:\", \"Doesn\\u2019t the computation of GNSA depend on the specific order of the point clouds? For example, comparing a_i and b_i only make sense if these below to the same datapoint, otherwise you\\u2019re comparing random elements.\", \"In Sec 4.1 you claim that \\u201ca good structural similarity index should show high similarity between architecturally identical neural networks with different weight initializations\\u201d. However, different initializations produce different models and there is no reason to assume that these should have the same structures. Also, in Figure 1 all the plots on the left are exactly the same. If this is not a typo, then I also don\\u2019t believe that the experiment shows what it is claimed. Additionally, the results here should be compared to the classical methods for comparing representations like Alaa et al, How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models, Kynk\\u00e4\\u00e4nniemi et al, Improved precision and recall metric for assessing generative models, NeurIPS 2019, Poklukar et al, Delaunay Component Analysis, ICLR 2022, Khrulkov et al, Geometry score: A method for comparing generative adversarial networks, ICML 2018, etc, which are also missing from the related work.\", \"Please add details in 4.2.1. on how GSNA is even calculated. 
What is X and what is Y?\", \"In Sec 4.3., I do not understand why an AE is used on top of the produced embeddings. In my view, a baseline should be the classification accuracy on the embeddings of the GCN or alternatively of a NSA-GCN trained model but not of a frozen GCN model with an AE attached to it. Also, as mentioned above, this experiment lacks comparison to any SOTA graph based methods which makes the applicability questionable.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for their responses.\\n\\n\\\"Appendix R compares NSA-AE\\u2019s performance on the swiss roll dataset when it aims to minimize euclidean distances vs geodesic distances.\\\"\\n\\nHow were geodesic distances computed/estimated? I saw no information about this in the manuscript.\"}",
"{\"comment\": \"9. I understand that these methods are not Structural Similarity Metrics but in the experiment in 4.1. you are directly analysing the representation space, thus I do not see a reason why these methods couldn't be used. It would be a great way to highlight the strengths of your method.\\n\\n10. Thank you for clarification and adding those details.\"}",
"{\"comment\": \"**11.Clarification on Section 4.3**\\n\\nThank you for raising this concern. As clarified in **Global Response 2**, the primary purpose of the link prediction experiment in Section 4.3 is to evaluate NSA\\u2019s ability to preserve structural integrity during dimensionality reduction, not to benchmark NSA as a state-of-the-art (SOTA) method for link prediction.\\n\\n1. Why Use an AE: The autoencoder (AE) is used to test NSA\\u2019s effectiveness as a structure-preserving loss function during dimensionality reduction. It enables us to reduce the dimensionality of the embeddings produced by a frozen GCN model while preserving relative distances and structural integrity. The AE is not intended as a competitive link prediction framework but rather as a mechanism to facilitate the use of NSALoss.\\n\\n2. Baselines: While a baseline of classification accuracy or link prediction using the raw GCN embeddings could be included, this would not serve the purpose of this experiment, which is to demonstrate how NSA ensures that reduced-dimensional embeddings retain their structural integrity. For reference, the revised manuscript's Table 2 includes the performance of a base GCN model directly trained on link prediction to provide context. However, this is not presented as a baseline that NSA competes against, as NSA\\u2019s goal in this experiment is not to improve link prediction but to demonstrate structure preservation during dimensionality reduction.\\n\\n3. Comparison to SOTA Methods: Comparing NSA to SOTA graph-based methods would be outside the scope of this experiment, as NSA is not a graph-specific model but a general structural similarity metric. As discussed in **Global Response 2**, NSA can serve as a supplementary loss function or metric for a variety of tasks, but it is not designed to replace task-specific SOTA methods.\\n\\nWe hope this clarifies the intent of Section 4.3 and its focus on dimensionality reduction and structural preservation. For a more detailed discussion, we encourage you to review Global Response 2. \\n\\nWe hope we have clarified all the queries the reviewer had. Please let us know if you have any more concerns.\\n\\n\\n\\n[1] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural\\nnetwork representations revisited. IN ICML 2019\\n\\n[2] Serguei Barannikov, Ilya Trofimov, Nikita Balabin, and Evgeny Burnaev. Representation topology\", \"divergence\": \"A method for comparing neural network representations. In ICML 2022\\n\\n[3] Enric Boix-Adsera, Hannah Lawrence, George Stepaniants, and Philippe Rigollet. 2022. GULP: a prediction based metric between representations. In NeurIPS.\\n\\n[4] Zuohui Chen, Yao Lu, Wen Yang, Qi Xuan, and Xiaoniu Yang. 2021. Graph-Based Similarity of Neural Network Representations. ArXiv preprint (2021)\\n\\n[5] Trofimov, I., Cherniavskii, D., Tulchinskii, E., Balabin, N.,Burnaev, E., and Barannikov, S. Learning topology preserving data representations. In The Eleventh International Conference on Learning Representations, 2023.\\n\\n[6] Klabunde, M., Schumacher, T., Strohmaier, M., and Lemmerich, F. Similarity of neural network models: A survey of functional and representational measures.\\n\\n\\n[7] Alaa et al, How Faithful is your Synthetic Data? 
Sample-level Metrics for Evaluating and Auditing Generative Models, \\n\\n[8] Kynk\\u00e4\\u00e4nniemi et al, Improved precision and recall metric for assessing generative models, \\n\\n[9] Poklukar et al, Delaunay Component Analysis, ICLR 2022, \\n\\n[10] Khrulkov et al, Geometry score: A method for comparing generative adversarial networks, ICML 2018\\n\\n[11] Ali Borji, Pros and Cons of GAN Evaluation Measures: New Developments\\n\\n[12] Serguei Barannikov et al. Manifold Topology Divergence: a Framework for Comparing Data Manifolds, NeurIPS 2021\"}",
"{\"comment\": \"We thank the reviewer for taking the time to evaluate our work in detail and for their comprehensive review. We have added several experiments to the revised manuscript as responses to the raised concerns and present our responses to the queries below:\\n\\n**1. Figures on the left in Figure 1 are wrong.**\\n\\nWe apologize for the error. We have fixed this in the revised manuscript\\n\\n**2. Specificity assumptions, especially in ResNets**\\n\\nThank you for the constructive feedback. Numerous studies have shown that similar layers across neural networks trained on the same data, differing only in initial weights, often exhibit high structural similarity upon convergence. This has been observed across various architectures, including ConvNeXt, VGG, ResNets, and Transformers, as demonstrated by prior structural similarity metrics. We present a more detailed response to this in **Global Response 3**. We also provide additional results in Appendix V showing consistent layerwise similarity across multiple GNN architectures and datasets, further supporting this phenomenon across diverse network types. For additional evidence across architectures and datasets, we refer to [1,2,3], which provide a broader set of results on various architectures. Since NSA performed on par or better than these metrics in our tests, we expect it to extend effectively to other architectures not explicitly covered in this paper.\\n\\n**Explanation of ResNet Analysis and Layer Selection:** Our ResNet similarity heatmaps take outputs from the end of each residual block rather than from intermediate layers within blocks. This choice aligns with findings from Kornblith et al.[1] with CKA, where ResNet layerwise comparisons show distinct patterns only when examining post-residual outputs. When every layer (including intermediate layers) is compared, as in Figure 4 of [1], the similarity matrix forms a grid-like pattern due to mismatches within block layers. By focusing on outputs at the end of each block (8 blocks in ResNet-18, plus the final fully connected layer), we observe a clearer similarity pattern across instances of the same architecture. This approach also aligns with findings by Veit et al. (2016)[4], which demonstrated that in ResNets, most significant gradient flow occurs along the short paths (skip connections and post-residual outputs), as longer paths contribute minimal gradients.\", \"experimental_setup\": \"For thoroughness, we compute average similarity over 10 trials for each metric when using it in subsets. For Figure 1, we average the performance across 3 separate runs (6 models trained with different seeds forming 3 pairs of comparisons). The standard deviation of most metrics on subset computation is a few orders of magnitude below the mean value and does not affect the results, so we do not include it in Figure 1. For reference we provide the standard deviation on both RTD and NSA in Table 1 and provide standard deviation of NSA's individual components in Appendix Q. Additionally we also present layerwise specificity tests in Appendix U where we show the standard deviation of the metrics across runs for Figure 1's specificity tests. We also report layerwise similarity results of ResNet-18 and ResNet-34 trained on ImageNet in Appendix I for further validation on image datasets. We hope these additional experiments will alleviate the reviewer's concerns.\\n\\n**3. Query on Equation 6**\\n\\nThank you for your question. 
While LocalNSA is inspired by the Local Intrinsic Dimensionality (LID) measure proposed by MacKay and Ghahramani, we intentionally deviate from their formulation for the following reasons:\\n\\n- LocalNSA's Objective: While LID computes the dimensionality of the entire space, LocalNSA does not aim to measure the LID itself. Instead, it is designed to quantify pointwise discrepancies between two spaces, ensuring that its theoretical properties remain valid.\\n- Pointwise Discrepancy Requirement: LocalNSA needs to operate at the pointwise level to preserve its role as a structural similarity metric. This requirement distinguishes it from global estimators like the one in MacKay and Ghahramani's formulation.\\n- Normalization and Range Stability: The use of the inverse in LocalNSA is a deliberate design choice. This ensures that:\\n(a) LocalNSA values remain consistent across different mini-batch sizes, maintaining stability during training.\\n(b) The values stay within a reasonable numerical range, avoiding issues such as skewed loss functions. As demonstrated in Figure 3b of Mackay and Ghahramani\\u2019s note, directly averaging the inverses (as in their estimator) results in a much wider range of values, which could lead to instability when used as a loss function.\\n- Theoretical Properties: The choice to use the inverse does not affect LocalNSA's theoretical properties. For instance, the convergence of x\\u2212y is equivalent to the convergence of 1/x - 1/y. Thus, the design choice to not inverse again is primarily for practical and numerical convenience.\"}",
"{\"title\": \"Clarification on novelty\", \"comment\": [\"We respectfully disagree with the statement that NSA does not convincingly demonstrate a significant advance as a structural representation metric. Besides outperforming previous works in our application benchmarks, NSA offers the following advancements over previous state-of-the-art (SoTA) methods:\", \"**Higher Sensitivity and Specificity:** NSA demonstrates superior sensitivity to high-variance components and specificity to structural changes across layers of neural networks compared to methods like RTD and CKA. These properties are validated in Section 4.1 and Appendix U.\", \"**Computational Efficiency:** NSA is significantly more computationally efficient than its best competitor, RTD\", \"**Differentiability and Continuity:** NSA is fully differentiable and continuous, enabling its use as a loss function in optimization pipelines. This is a critical limitation of prior metrics, which are often discrete and non-differentiable.\", \"**Global Approximation in Mini-Batching:** As far as we are aware, NSA is the first structural similarity metric that is differentiable and capable of approximating its global value when applied in mini-batches. Previous metrics, even if differentiable, lack this property, making NSA uniquely suited for scalable applications\", \"**Local and Global Structure Preservation:** NSA provides both global and local focus on structure preservation unlike previous works who only look at one of the two perspectives\", \"**Explainability:** NSA\\u2019s reliance on distance-based measures makes it inherently explainable, as discrepancies can be directly tied to measurable distances between points in the representation space.\", \"**Pointwise Discrepancy Analysis:** Unlike prior metrics, NSA provides pointwise granularity, enabling the identification of specific sources of structural discrepancies in the representation space. Topologically focused works like RTD look at topological features instead of points and thus do not have the ability to perform pointwise analysis\", \"**Flexibility:** NSA\\u2019s formulation can be modified to suit alternative distance measures without fundamentally altering the metric\", \"We would be grateful if the reviewer could clarify which **important concerns** remain unaddressed and why they believe NSA does not represent a significant advancement over existing metrics. This would allow us to address their concerns more directly and improve the clarity and impact of our manuscript.\"]}",
"{\"summary\": \"The paper introduces Normalized Space Alignment (NSA), a novel metric to analyze neural network representations. NSA compares two point clouds (representing data structures within neural networks) by preserving both global and local structures, regardless of differing dimensionalities. It can be used to preserve representation structures in tasks such as suitable for diverse tasks such as dimensionality reduction, adversarial robustness assessment, and cross-layer representation analysis. NSA\\u2019s main advantage is its ability to efficiently preserve global and local structure across different dimensional spaces. The authors showcase NSA\\u2019s versatility by applying it across various tasks, demonstrating its computational efficiency and robustness, particularly in mini-batch processing.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"NSA introduces a new approach for representation alignment with applications in dimensionality reduction, structure-preserving autoencoders, and robustness analysis, highlighting its adaptability to multiple tasks.\", \"Its quadratic computational complexity improves on the cubic complexity of alternative metrics like RTD, making it suitable for large datasets and mini-batch processing in training.\", \"NSA is evaluated across multiple tasks and datasets, and compared with established metrics (CKA and RTD).\"], \"weaknesses\": \"__Figure 1__: all 3 plots in the left column are the same.\\n\\n__Specificity assumptions:__ In section 4.1.1, the authors expect that the same layers of two networks trained on the same data and differing only in the initial weights should have high structural similarity. However, the actual layer in which similar features are learned may vary, particularly in ResNets (due to their residual connections). This is a well-known phenomenon: residual connections allow networks to adapt flexibly, enabling the model to skip certain layers or distribute features across them depending on initial weights and learning dynamics. See:\\n\\n[1] Veit, A., Wilber, M. J., & Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. Advances in neural information processing systems, 29.\\n\\nThus, instead of showing a single example result in Figure 1, the authors would make a stronger case if they (i) reported the average across multiple instances of the same networks; and (ii) used multiple architectures and datasets.\\n\\n__Equation 6__: for the units of LNSA to make sense, you should take the inverse again, after computing the mean of the inverses. That's what MacKay and Ghahramani actually do -- notice the -1 power in the formulas for their estimators ($\\\\hat{m}^{-1}$). You can also check this on the source code they provided. In their Fig. 2, the best curves are: \\\"the inverse of the average of the inverse m-hats (orange), and our preferred maximum likelihood estimator (which is just equation (7) again.\\\"\\n\\nHaving said that, I don't think you should compute the individual residuals using the Lid inverses. The residuals should keep their units of \\\"dimension\\\". How do the authors justify this?\\n\\n__GNSA:__ I see a problem with this dissimilarity in that it can produce large values if the geometry of the manifold changes but the topology stays the same. 
A classic example where this would happen is for the \\\"swiss roll\\\" dataset (https://scikit-learn.org/1.5/auto_examples/manifold/plot_swissroll.html): the GNSA value comparing the original roll and its unrolled counterpart would be very large since, although the first several nearest neighbors of a point $i$ would not change their distances much, points that are far away (following along the spiral) would become considerably farther after flattening the roll. I believe this would lead to large GNSA even though the two manifolds are topologically identical. Have the authors considered this? If they agree, I suggest a more thorough discussion on strengths and weaknesses of GNSA.\\n\\n__Lack of ground truth__: I believe this study would greatly benefit from using toy datasets that provide some ground truth to verify the efficacy of the method proposed. E.g., Gaussian clusters of various dimensionalities, the 1-D spiral, the 2-D S-curve, a plane with a hole; these have been classically used in the manifold learning literature. Here are a couple examples of recent papers that use interesting toy datasets as ground truth for comparing low-dimensional embeddings and dimensionality:\\n\\n[2] Wang, Yingfan, et al. (2021) \\\"Understanding how dimension reduction tools work: an empirical approach to deciphering t-SNE, UMAP, TriMAP, and PaCMAP for data visualization.\\\" Journal of Machine Learning Research 22.201: 1-73.\\n\\n[3] Dyballa, L., & Zucker, S. W. (2023). IAN: Iterated Adaptive Neighborhoods for manifold learning and dimensionality estimation. Neural Computation, 35(3), 453-524.\\n\\nIt would be informative to have some simple, intuitive examples that could be directly visualized in 2 or 3 dimensions. Such datasets could be perturbed in ways that _did_ change their topology and structural relationships vs. others that _did not_, the goal being to check whether the values produced by LNSA and GNSA would reflect the truth.\", \"questions\": \"__Figure 1:__ assuming the values being plotted are means, how many tests were performed? What are their standard deviations? This is especially important since the data is being subsampled for the computations, and training seeds will produce different networks.\\n\\n__Figure 3:__ same questions here with regards to whether the curves represent means. Stating how many repetitions and the standard deviations is important to understand the significance of these curves.\\n\\n__Lines 324--328__: I had trouble understanding the sensitivity test. Although the notion of testing robustness to the removal of principal components makes perfect sense to me, it was not clear how the plots in Fig. 1 demonstrated, e.g., that \\\"NSA is more sensitive to the removal of high variance PCs compared to RTD and CKA\\\". Moreover, I'm not sure how to interpret the values for \\\"detection threshold\\\", especially since the values in the main text are different than those in the figure. What are the \\\"baselines\\\" mentioned in the plots' legends?\\n\\n__Line 435:__ \\\"the latent embeddings are then tested on their ability to predict the existence of links between nodes\\\". How exactly are they tested on this? Are they used as inputs in another GCN? This wasn't clear to me.\\n\\nIn lines 197, 199, 272, surely the authors mean dissimilarity, not similarity (since they compute distances)? 
There are more instances throughout the paper where these metrics are called \\\"similarities\\\".\\n\\n__Line 497:__ \\\"We used a GCN along with four __robust__ GNN variants...\\\". Why robust? Robust to what exactly?\\n\\n__Line 500:__ \\\"by introducing perturbations ranging from 5% to 25%\\\". These percentages are w.r.t. what exactly? And what is the nature of these perturbations? Removing/changing links, nodes, or both?\\n\\n__Minor points:__\\n\\n- I found no pointer to Figure 3 in the main text.\\n\\n- Line 493: \\\"we applied NSA in the context of GNNs, but the method __can__ be equally effective in analyzing the robustness of other architectures\\\". I recommend changing __can__ to \\\"might\\\", or \\\"could\\\", unless the authors have actually tested this empirically.\\n\\n- Line 503: I recommend saying \\\"the __original__ graph\\\" instead of \\\"the _clean_ graph\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**9. The results here should be compared to the additional works referenced:**\\n\\nThank you for suggesting these works.\\n\\nWe reviewed [7, 8, 9] and decided they fall outside the scope of our current manuscript. While they provide valuable evaluation methods for generative models, they primarily focus on assessing the quality of a generated space relative to a ground truth space using metrics like precision, recall, and authenticity. These approaches differ fundamentally from NSA for the following reasons:\\n1. Not Structural Similarity Metrics: These methods evaluate the fidelity and diversity of generative outputs rather than measuring structural similarity between two spaces. As such, they do not fall strictly into the category of structural similarity indices (e.g., CKA[1], RTD[2], GBS[4], MTD[12], GULP[4]).\\n2. Non-Differentiability: The metrics rely on non-differentiable operations, such as graph-based manifold approximations and discrete counting, making them unsuitable for use as loss functions in gradient-based optimization pipelines. In contrast, NSA is fully differentiable and can be integrated seamlessly into training objectives.\\n3. Dependency on Ground Truth: All three methods rely on a ground truth representation space to compute precision, recall, and authenticity metrics. NSA, by design, measures structural similarity between two representation spaces, irrespective of whether one is \\\"generated\\\" or \\\"ground truth,\\\" providing a symmetric quantification of discrepancies between the two spaces.\\n\\n\\n\\nGeometry Score [10] is a topologically driven measure to compute differences between representation spaces, but it has several shortcomings [11,12].\\n- Lack of Sensitivity: GScore fails to capture meaningful differences between distributions under simple transformations like shifts, scaling, or reflections. It also struggles to detect critical issues such as mode dropping or mode invention.\\n- Stochasticity: Its reliance on approximate barcodes introduces significant variability, requiring thousands of repetitions for reliable results, which is computationally prohibitive.\\n- Scalability: GScore is inefficient and impractical for high-dimensional datasets or modern large-scale applications.\\n\\nThese shortcomings were addressed by Manifold Topology Divergence (MTD)[12], which improved upon GScore by providing a more robust and scalable topological analysis. Furthermore, Representation Topology Divergence (RTD)[2]\\u2014a direct successor to MTD from the same authors\\u2014refined and extended these improvements, making it the most robust and comprehensive method in this lineage.\\n\\nSince NSA is already compared extensively to RTD in our work, a comparison with GScore would be redundant and less informative. We focus on RTD because it not only improves upon GScore and MTD but also serves as a state-of-the-art method for structural analysis.\\n\\nWe understand the importance of a comprehensive review of related work. However, we also believe it is critical to maintain a focused scope to ensure clarity and coherence in presenting our contributions. Including an extensive comparison with every evaluation metric that aims to enhance representation learning, especially those with a narrow focus on generative modeling, might divert attention from the novelty and broad applicability of NSA.\\n\\n\\n**10. Details in 4.2.1. 
on how GSNA is calculated.**\\n\\nIn this experiment, we extract output embeddings from two neural networks with identical architectures but different random initializations. These embeddings represent the model outputs for a given dataset. To evaluate the convergence of NSA, we compute GNSA on randomly sampled subsets of these embeddings and compare the subset-derived values to the global GNSA value computed on the entire dataset.\\n\\nWe repeat this process multiple times for different subsets (of size 200 and 500) and observe how the subset-based GNSA approximates the global value as the number of trials increases. The results demonstrate that GNSA reliably converges to the global value with sufficient sampling, highlighting its robustness in mini-batch settings. In contrast, RTD fails to converge effectively, emphasizing the advantage of NSA in capturing global information of large datasets, even when working with subsets of data. We have added some clarification to the experimental setup in Section 4.2.1 of the revised manuscript.\"}",
"{\"comment\": \"**13. Applying NSA on CNN adversarial analysis**\\n\\nThank you for your feedback. NSA looks at the original and perturbed representation space when a model has been attacked. Regardless of the architecture, if an adversarial attack attempts to modify the model by changing how it classifies certain data points, then the output representation space of the perturbed model will change and NSA will be able to detect this. We have added experiments in Appendix I where demonstrate that NSA shows a strong correlation with misclassification rate on a CNN too, using a ResNet trained on CIFAR-10. We also present experiments showing NSA's ability to perform pointwise analysis to identify the source of perturbations with various ResNet architectures.\\n\\n**14. Original graph instead of clean graph**\\n\\nThank you for your feedback. We have made this change.\\n\\nWe hope we have addressed all the queries raised by the reviewer. Please let us know if you have any additional concerns.\\n\\n\\n\\n\\n[1] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural\\nnetwork representations revisited. IN ICML 2019\\n\\n[2] Serguei Barannikov, Ilya Trofimov, Nikita Balabin, and Evgeny Burnaev. Representation topology\", \"divergence\": \"A method for comparing neural network representations. In ICML 2022\\n\\n[3] Zuohui Chen, Yao Lu, Wen Yang, Qi Xuan, and Xiaoniu Yang. 2021. Graph-Based Similarity of Neural Network Representations. ArXiv preprint (2021)\\n\\n[4] Veit, A., Wilber, M. J., & Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. Advances in neural information processing systems, 29.\\n\\n[5] Wang, Yingfan, et al. (2021) \\\"Understanding how dimension reduction tools work: an empirical approach to deciphering t-SNE, UMAP, TriMAP, and PaCMAP for data visualization.\\\" Journal of Machine Learning Research 22.201: 1-73.\\n\\n[6] Ding.,et al. Grounding representation similarity through statistical testing. Advances in Neural Information Processing Systems, 2021\\n\\n[7] Felix Mujkanovic, Simon Geisler, Stephan G\\u00fcnnemann, and Aleksandar Bojchevski. Are defenses for graph neural networks robust? In Neural Information Processing Systems, NeurIPS, 2022\"}",
"{\"title\": \"Global Response Part 2\", \"comment\": \"**3. Different initializations produce different models and there is no reason to assume these should have the same structures**\\n\\nThe idea that corresponding layers of architecturally similar models show high relative structural similarity is not novel to NSA but is a well-established benchmark for evaluating similarity metrics. This approach was first introduced by Kornblith et al.[1] in their seminal work on CKA, and it has since been validated and adopted by several subsequent studies [2,3,4,5,6,7]. The idea of similar generalization and performance between networks trained with different seeds has been extensively examined by Thomas et al.[8]. Their experiments with 100 BERT models confirm that, while functional differences may lead to significant variability in out-of-distribution (OOD) performance, networks trained to convergence on the same dataset exhibit highly similar structural representations, particularly for in-distribution data. This idea is key to the validity of not only the layerwise analysis presented in similarity metric papers but also to the grounding tests proposed by Ding et al [7]. This structural alignment is a key factor contributing to the consistent performance of such networks on in-distribution tasks. \\n\\nAll our specificity tests are conducted exclusively on in-distribution data, ensuring that our analysis aligns with the well-validated findings from prior literature. We have fixed the error with the figures in Section 4.1 and also improved the writing on Specificity tests. We also include an in-depth layerwise breakdown of the specificity test in Appendix U.\\n\\nIn our initial presentation, we claimed that a good structural similarity index should show high similarity between architecturally identical networks trained with different weight initializations. 
We have revised this claim to a more nuanced one: a good similarity index should exhibit the highest relative similarity for corresponding layers in architecturally identical networks with different initializations, compared to non-corresponding layers.\", \"we_present_a_detailed_changelog_of_all_the_improvements_made_to_the_manuscript_in_this_revision\": [\"Fixed the error in Figure 1 (left) which had the same figure for all 3 metrics.\", \"Improved the readability of some figures by replacing them with higher quality versions\", \"Added reference to Figure 3 that was missing\", \"Rewrote Section 4.1.1 and improved the clarity of Section 4.1.2\", \"Added additional Specificity Tests in Appendix U\", \"Added ablation studies visualizing the effect of k on the performance of LocalNSA in Appendix Q.\", \"Added explanations for the choice of k and figures demonstrating empirical convergence of LocalNSA in mini batching scenarios to -Appendix Q.\", \"Added reconstruction results of NSA-AE on the Mammoth Dataset to Appendix R\", \"-Added GCN performance on Link Prediction at different dimensionalities in Section 4.3\", \"Added heatmaps showing cross layer similarity performance on a subset of the ImageNet dataset (100K images) on ResNet-18 and ResNet-34 to show NSA\\u2019s performance on large datasets in Appendix I\", \"Added a plot to show the variation of mean NSA and standard deviation of Global NSA and Local NSA to show the standard deviation of both metrics is significantly lower when working with subsets in Appendix Q\", \"Added results on NSA\\u2019s correlation to misclassification rate in CNNs in Appendix I\", \"Added visualizations on the 1D spiral in 2D Space dataset to provide simple visual explanations on how LNSA and GNSA reflect the ground truth when different types of transforms are applied to the data in Appendix Z.\", \"[1] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. IN ICML 2019\", \"[2] Serguei Barannikov, Ilya Trofimov, Nikita Balabin, and Evgeny Burnaev. Representation topology divergence: A method for comparing neural network representations. In ICML 2022\", \"[3] Enric Boix-Adsera, Hannah Lawrence, George Stepaniants, and Philippe Rigollet. 2022. GULP: a prediction based metric between representations. In NeurIPS.\", \"[4] Zuohui Chen, Yao Lu, Wen Yang, Qi Xuan, and Xiaoniu Yang. 2021. Graph-Based Similarity of Neural Network Representations. ArXiv preprint (2021)\", \"[5] Trofimov, I., Cherniavskii, D., Tulchinskii, E., Balabin, N.,Burnaev, E., and Barannikov, S. Learning topology preserving data representations. In The Eleventh International Conference on Learning Representations, 2023.\", \"[6] Klabunde, M., Schumacher, T., Strohmaier, M., and Lemmerich, F. Similarity of neural network models: A survey of functional and representational measures.\", \"[7] Ding.,et al. Grounding representation similarity through statistical testing. Advances in Neural Information Processing Systems, 2021\", \"[8] R. Thomas McCoy, Junghyun Min, Tal Linzen. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. 2020 BlackboxNLP workshop\"]}",
"{\"comment\": \"**3. No ablation studies are provided**\\n\\nWe would like to clarify that detailed ablation studies are provided in Appendix Q (Appendix P before revision), focusing on the effect of $l$ and $g$ on NSA. Our quantitative analysis demonstrates that while GNSA alone performs well for downstream tasks, combining it with LNSA yields even better results. Additionally, we include visual ablations using the Spheres dataset to show how different combinations of l and g influence the structure of complex representation spaces. As part of the revision, we have also added ablations visualizing the impact of different values of $k$ on NSA\\u2019s performance at various batch sizes, providing further insights into the method's parameter behavior.\\n\\nWe kindly direct the reviewer to **Global Response 1** where we discuss the parameter tuning requirements for NSA. The presented ablations for $l$,$g$ and $k$ in Appendix Q show that NSA is mostly effective on a large range of hyperparameter values. We also added visual ablations in Appendix Q on the swiss roll dataset to demonstrate how the value of $k$ could potentially affect convergence.\\n\\n**4. Definition of GNSA depends on the choice of the origin.**\\n\\nThank you for your comment. It is true that GNSA depends on the choice of origin, but this concern is easily manageable in practice and does not pose a significant limitation for NSA\\u2019s intended applications:\\n1. **Normalization is Standard Practice:** In most neural network workflows, normalizing data to center it around the origin is a routine step, either as part of preprocessing or within the training pipeline. For example, when comparing two spaces, we can normalize both to share a common origin (e.g., centering them at (0,0)) without affecting the structural properties being measured. This ensures NSA scores remain invariant to translations.\\n2. **Practical Scenarios:**\\n- In applications like dimensionality reduction, knowledge distillation, or training a lower-dimensional space using a higher-dimensional reference, the origin of the reference space is constant and can be explicitly accounted for. In these tasks, the exact position of the representation space is typically secondary to its structure, and alignment via normalization is sufficient to address origin dependency.\\n- When performing static analysis, NSA only cares about the structure of the space and not its absolute position. Since we always have access to the origin of the space, it is easy to normalize the space. NSA\\u2019s formula also accounts for the origin of the space not being 0,0 (as mentioned in footnote 2 on Page 4)\\n\\n**5. Figures are not readable.**\\n\\nThank you for pointing this out. We have replaced some of the figures in the manuscript with higher quality figures. Could you please let us know which figures specifically have this issue so we can improve it?\\n\\n**6. Doesn't GNSA depend on specific order of point clouds?**\\n\\nThe computation of GNSA indeed relies on a consistent ordering of the point clouds, as \\na_i and b_i are assumed to correspond to the same data point in the two spaces being compared. This requirement is inherent to all pairwise structural similarity measures[1,2,3,4,5,6]. In practice, ensuring consistent ordering is straightforward in scenarios where the two spaces are derived from the same dataset (e.g., embeddings from two neural networks trained on the same data), the ordering is naturally preserved during computation. \\n\\n**7. 
Different initializations produce different models and there is no reason to assume these should have the same structures.**\\n\\nThank you for your feedback. We direct the reviewer to **Global Response 3** where we address this query in detail. \\n\\n**8. In Figure 1 all the plots on the left are exactly the same.**\\n\\nThank you for pointing this out. This is a typo. We have fixed this plot in the revised version of the manuscript.\"}",
"{\"title\": \"Global Response Part 1\", \"comment\": \"We are sincerely grateful to all reviewers for their thoughtful feedback and suggestions, which we believe are very beneficial for our work. Your suggestions have helped improve our manuscript and below we present responses to a few common queries along with a description of the changes made to the manuscript in this revision.\\n\\n**1. NSA requires significant fine-tuning of hyperparameters and ablation studies are missing**\\n\\nWe appreciate the reviewers' concerns regarding the dependency of NSA on the hyperparameters $l$, $g$ and $k$. While NSA includes three parameters, $l$ and $g$ are straightforward to tune as they simply control the balance between local manifold preservation and global geometric structure preservation. In most cases, unless one is weighted significantly more than the other, LNSA and GNSA complement each other to produce consistent results. Across all experiments where both components are used together, we set \\\\($l$= 1\\\\) and \\\\($g$ = 1\\\\). Ablation studies provided in Appendix Q demonstrate that GNSA alone performs satisfactorily for both dimensionality reduction and downstream tasks, while LNSA alone is less effective. However, combining both components produces the best results. Additionally, we include visual results using the Spheres dataset to show that the encoder successfully preserves the structure of the dataset across a wide range of $l$ and $g$ values.\\n\\nThe parameter $k$ requires more nuance than $l$ and $g$, as it controls the number of nearest neighbors considered for LNSA. Unless one aims to optimize local structure preservation at a specific scale, most $k$ values perform well to preserve local structure. In the revised manuscript, we provide ablation studies on $k$, including visual results on the Swiss Roll dataset, illustrating how varying $k$ affects the degree of structure preservation. Additionally, we present empirical results demonstrating LNSA\\u2019s convergence to its global value under mini-batching with the appropriate $k$ value. We also offer guidance on selecting an optimal $k$ to align with the desired level of local structure preservation.\\t\\n\\n\\n**2. Clarifications on Section 4.3 (Downstream Task Analysis)** \\n\\nIn Section 4.3, we evaluate NSA\\u2019s ability to preserve critical structural information during dimensionality reduction by performing a link prediction task on the lower-dimensional embeddings. The embeddings are generated by reducing the original high-dimensional representation space using NSA-AE, and we assess their quality using ROC-AUC. Even in the reduced dimensional space, there is a one-to-one mapping between nodes, allowing us to compute ROC-AUC using the original graph's positive edge indices and an equal number of sampled negative edge indices. The dot product of the embeddings determines the probability of a link existing between two nodes.\\n\\nThis experiment is not intended to showcase NSA as a supplementary loss function for improving link prediction but rather to highlight its ability to retain the structural integrity of the representation space after dimensionality reduction. While retraining a GCN in the reduced dimension may be feasible for smaller models, this approach becomes impractical for larger models where pretraining is computationally expensive. 
Instead, NSA-AE offers a computationally efficient alternative by ensuring that the relative distances and contextual information in the reduced-dimensional space remain preserved, yielding performance comparable to a newly trained model as shown in Table 2. \\n\\nTo clarify, link prediction is not the ideal task for NSA, as it requires a reference space to align to. For link prediction, directly training a GCN is simpler and more efficient than introducing NSA-AE. NSA is a metric for comparing representations across two spaces. It can be used to explain the performance of machine learning tasks. It can also be used as a loss function that can be differentiated and estimated quickly in mini batches and will be much more effective in tasks like dimensionality reduction, adversarial training, and knowledge distillation, where preserving representation structure or aligning to a reference space is critical. But it is up to a method to decide how to use it for solving a specific machine learning problem (in the same spirit as other similarity metrics such as RTD and CKA).\"}",
"{\"summary\": \"The paper introduces Normalized Space Alignment(NSA), a new manifold analysis technique designed to compare neural network representations; NSA compares pairwise distances between point clouds from the same data source but with different dimensionalities. NSA is proposed as both a differentiable loss function and a similarity metric, and it is computationally efficient. The paper demonstrated the NSA's versatility in representation analysis, structure-preserving tasks, and robustness testing against adversarial attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) NSA can be used both as a loss function and a similarity metric across different applications\\n2) NSA is designed to work efficiently in large-scale applications with a quadratic complexity that is better than some existing methods \\n3) It is also effective in preserving structural characteristics and identifying vulnerabilities in neural networks, even under adversarial attacks\\n4) the paper provides a thorough analysis with multiple experiments and comparisons to other methods like RTD, CKA validating NSA's effectiveness\", \"weaknesses\": \"1) The reliance on Euclidean distance as a primary metric may limit performance in high dimensional spaces due to curse of dimensionality\\n2) NSA is versatile but may not require careful tuning and modifications to work effectively in specific scenarios\\n3) The limitations of NSA are not explored beyond high-dimensionality issue\", \"questions\": \"1) How does NSA perform in extremely high-dimensional spaces where Euclidean distance is known to be problematic? Are there alternative distance metrics that could be integrated into NSA?\\n2) How sensitive is NSA to parameter settings, and what are the best practices for tuning it in different applications (e.g., adversarial robustness vs. dimensionality reduction)?\\n3) Given the versatility of NSA, do you envision any specific areas where its application would be limited or challenging?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We sincerely thank the reviewer for engaging in a discussion and their feedback.\\n\\n**1. Clarification on Geodesic Distance computation**\\n\\nWe use the Floyd-Warshall algorithm to approximate geodesic distances, ensuring that the pairwise shortest path distances on the manifold are accurately represented. We first create a k-nearest neighbors graph from the dataset then compute the all pairs shortest path using the FW algorithm. Given that the reference space remains fixed, this is a one time preprocessing cost. And since this matrix inherently reflects Euclidean distances when the representation space is reduced to its manifold dimension, we can use NSA with no modifications to the metric to operate as a loss function. We have added a line in Appendix R mentioning the algorithm used. We hope this clarifies the process used to perform the swiss roll manifold approximation experiment. Please let us know if you require additional details.\\n\\n**2. Addition of toy dataset experiments**\\n\\nWe did perform experiments with a few toy datasets to visually observe the ability of NSA to preserve the structure of the representation space.\\n\\n- Figure 14,15 and 16 showcase NSA\\u2019s ability to preserve local and global structure with the 3D Swiss Roll, Spheres and Mammoth Dataset\\n- In the latest revision we also added visualization experiments with a 1D spiral in 2D space in Appendix Z, where we introduce targeted transformations to the original space (we perform global structure preserving transforms, manifold preserving transforms, structure altering transforms and manifold altering transforms) and present the GNSA and LNSA values to demonstrate their invariances and properties. Both GNSA and LNSA reflect the changes made to the ground truth based on their definitions i.e GNSA changes when the global structure is modified, LNSA changes when local structure is changed. \\n\\nWe hope this satisfies the reviewer\\u2019s concerns on using Toy Datasets to visualize the effects of NSA.\"}",
"{\"metareview\": \"The authors propose normalized space alignment (NSA) as an analysis technique for neural network representations. NSA combines the global NSA (GNSA) which compares the pairwise Euclidean distances in the two representations, and local NSA (LNSA) which measures dissimilarity from k-NN graph. The proposed NSA can be applied as an analytical tool and a loss function. The authors evaluate the proposed method on various datasets for these aforementioned tasks.\\n\\nAlthough the Reviewers think the proposed approach is interesting, the Reviewers raised concerns on the choice of Euclidean distance in GNSA (for such manifold analysis technique), sensitivity of the k-NN graph in LNSA, scalability for large datasets. The Reviewers also raised concerns on the evaluation without ground truth for manifold analysis task. Additionally, the Reviewers also question about empirical evidences on the advantages of the proposed approach in applications, it is better to compare the proposed approach with recent baselines for the corresponding tasks. Overall, we think that the submission is not ready for publication yet. The authors may consider the Reviewers' comments to improve the submission\", \"additional_comments_on_reviewer_discussion\": \"The Reviewers raised several concerns about the proposed method, especially on the choice of Euclidean metric for GNSA for manifold analysis, and sensitivity of k-NN graph in LNSA. Additionally, the empirical evidences do not convince the Reviewers yet, several concerns are raised as listed above in the meta-review.\"}",
"{\"comment\": \"In my review I suggested including toy datasets to more concretely evaluate the performance of the method as a measure of structural dissimilarity. \\\"Such datasets could be perturbed in ways that did change their topology and structural relationships vs. others that did not, the goal being to check whether the values produced by LNSA and GNSA would reflect the truth.\\\"\\n\\nDid the authors have the chance to perform any of these structural perturbation experiments as means of providing ground truth? I could not find them in the revised manuscript.\"}",
"{\"title\": \"Friendly Reminder\", \"comment\": \"Dear Reviewer,\\n\\nThe revision period for the rebuttal will be ending soon. We would be extremely grateful if you could take the time to review our rebuttal and let us know if it has resolved all of your concerns. If you have any further questions, we would be happy to answer them.\\n\\nSincerely,\\n\\nThe Authors\"}",
"{\"comment\": \"**4. GNSA can have problems with Swiss Roll and similar datasets**\\n\\nThank you for your valuable feedback. We agree that GNSA, as currently formulated with Euclidean distances, preserves the exact geometric structure rather than the topology of the manifold. This design choice means that NSA is sensitive to changes in global distances, as observed in cases like the Swiss Roll dataset when flattened. Appendix R specifically addresses this situation, highlighting that NSA\\u2019s use of Euclidean distance is intended to preserve geometric fidelity rather than topological structure.\\n\\nTo extend NSA\\u2019s applicability to manifold topology, one can substitute Euclidean distance with geodesic distance. This modification enables NSA to capture the underlying manifold by approximating geodesic distances between points, preserving topological similarity while maintaining all NSA properties. Appendix R compares NSA-AE\\u2019s performance on the swiss roll dataset when it aims to minimize euclidean distances vs geodesic distances. \\n\\n**5. Addition of toy and visualization focused datasets**\\n\\nThank you for your feedback. In our study, we incorporate several well-established visualization focused datasets to validate NSA\\u2019s efficacy and to demonstrate its flexibility across different use cases. We also added results from the Mammoth dataset taken from [5] in the revision. Please find below a list of visualization focused datasets in the manuscript:\\n\\n- In Section 4.2, we utilize the COIL-20 dataset to evaluate NSA\\u2019s ability to preserve structural similarity during dimensionality reduction.\\n- The Swiss Roll dataset is used to illustrate the difference between geometric and topological preservation. By comparing NSA with Euclidean distances to NSA with geodesic distances, we show how the choice of distance metric influences NSA\\u2019s ability to capture manifold structure. The Swiss Roll dataset is also used to perform ablation studies on the parameter $k$ of LNSA.\\n- In Appendix R, we analyze NSA\\u2019s performance on toy datasets like Spheres and Mammoth to evaluate its capacity to maintain both global and local structure. Additionally, we also use the Spheres dataset to perform visual ablation studies in Appendix Q that demonstrate how different parameter choices affect NSA\\u2019s performance. \\n- In Section 4.4 we demonstrate how NSA can pinpoint sources of discrepancy by switching to a node wise variation of NSA (refer to the formula here). This showcases NSA\\u2019s ability to not only identify structural discrepancies but also pinpoint the source of these deviations. \\n\\n**6. Mean and Standard Deviation values for Figure 1 and Figure 3**\\n\\nThank you for your feedback. As previously stated, we compute average similarity over 10 trials for each metric when using it in subsets. For Figure 1, we average the performance across 3 separate runs (6 models trained with different seeds forming 3 pairs of comparisons). The results on mean and standard deviation values across runs and the experimental setup is presented in Appendix U. In Figure 3 we run our experiments on the Cora dataset which has 2708 nodes and hence we do not need to use subsets to compute NSA. We have clarified in the main text\\n\\n**7. Clarifications on the sensitivity test**\\n\\nThank you for the question. 
In principle, the idea proposed by Ding et al.[6] states that as principal components are sequentially removed, starting with the lowest-variance components, the dissimilarity score should increase. A good metric should be sensitive to the removal of these high-variance components. Figure 1 demonstrates that NSA\\u2019s dissimilarity score rises significantly when high-variance PCs are removed, indicating greater sensitivity to impactful structural changes. In contrast, RTD and CKA show a flatter response, reflecting reduced sensitivity until the most significant components are removed. In line with Ding et al.'s[6] setup, we perform this analysis on the representations extracted from the first and last layers of a neural network.\\n\\nSince different metrics operate within different ranges, the absolute values of the curves may not be directly comparable. To address this, [6] proposed a detection threshold, which defines the dissimilarity score above which changes become \\u201cdetectable.\\u201d This baseline is the dissimilarity score between representations from differently initialized networks, serving as a reference for determining when structural changes due to PC removal become significant.\\n\\nThe main text refers to the average percentage of principal components that can be removed before crossing the detection threshold. The confusion arose because we referred to this average percentage as the detection threshold itself. We have updated the text to clarify this distinction.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Friendly Reminder\", \"comment\": \"Dear Reviewer,\\n\\nThe revision period will be ending soon. We would be extremely grateful if you could take the time to review our rebuttal and let us know if it has resolved all of your concerns. If you have any further questions, we would be happy to answer them.\\n\\nSincerely,\\nThe Authors\"}",
"{\"comment\": \"We thank the reviewer for their feedback and are glad that they found the applicability of the method and background on LID satisfactory. We address their concerns below:\\n\\n**1. Lack of comparison to relevant existing methods and relevant baselines**\\n\\nThank you for your feedback regarding comparisons to existing methods in link prediction and adversarial attack experiments. We appreciate the opportunity to clarify the intent and scope of these experiments.\\n\\n**General Scope of NSA:** NSA is primarily a metric for comparing representations across two spaces. It can explain the performance of machine learning tasks and serve as a differentiable loss function that can be efficiently estimated in minibatches. However, NSA itself does not prescribe how it should be applied to specific machine learning problems. Its usage, like that of other similarity metrics such as RTD and CKA, depends on the method or task in question.\\n\\n**Link Prediction Experiments:** The link prediction experiment was not designed to showcase NSA as a supplementary loss function for improving link prediction but rather to demonstrate NSA\\u2019s ability to retain the structural integrity of representation spaces after dimensionality reduction. For smaller models, retraining a GCN in the reduced dimension is feasible. However, for larger models where pretraining is computationally expensive, NSA-AE provides a more practical alternative by ensuring that the reduced-dimensional space preserves relative distances and contextual information. As shown in Table 2, the performance of NSA-AE on reduced-dimensional data is comparable to that of a newly trained model.\\n\\nIn response to your suggestion, we have added results in Table 2 showing how a base GCN performs on link prediction as a reference. However, it is important to emphasize that this reference is not a baseline NSA is competing against but rather a contextual benchmark to understand NSA\\u2019s behavior. For more details, please refer to **Global Response 2**, where we discuss this experiment in depth.\\n\\n**Adversarial Analysis Experiments:** In the adversarial analysis experiments, NSA is not used as a defense mechanism but as a post-hoc analysis tool to evaluate the structural impact of attack and defense methods. While NSA could potentially serve as a structure-preserving loss function to facilitate robust neural network training, it is outside the scope of this manuscript which presents a breadth focused introduction to NSA. We leave the idea of using NSA to facilitate robust training of neural networks to future work. Instead, the goal is to introduce NSA\\u2019s ability to correlate with disagreements in predictions and to perform fine-grained, nodewise analysis.\\n\\nIt is worth noting that NSA has a unique capability in this regard. Unlike the next best similarity metric, RTD, which analyzes topological features of spaces, NSA can examine individual data points and quantify their local discrepancies. This makes NSA particularly suited for tasks requiring high-resolution analysis of adversarial perturbations.\\n\\n**2. Datasets used in the experiments are small and basic, and the generalization of the method is questionable.**\\n\\nThank you for your concern regarding the datasets used in our experiments. We address the generalizability and scalability of NSA by presenting results on datasets of varying scale and domain:\\n\\n1. 
Dataset Variety: Our experiments span visualization-focused datasets (e.g., COIL-20, Spheres, Swiss Roll, Mammoth), benchmark datasets (e.g., MNIST, F-MNIST, CIFAR-10, Cora, Citeseer, Pubmed), and large-scale datasets (e.g., ImageNet, WordNet, Amazon Computers with 13K nodes and 500K edges, Flickr with 90K nodes and 900K edges). These datasets cover diverse domains, including images, natural language processing (NLP), and graphs. We provide detailed statistics for each dataset in Appendix P.\\n2. Scalability and Generalizability: The variety in our dataset choices demonstrates NSA\\u2019s scalability and generalizability across domains and tasks. Generalizability is evidenced by NSA\\u2019s effectiveness across different dataset types and scales, which is a key focus of this paper.\\n3. Theoretical Guarantees: In Section 4.2.1, we provide theoretical guarantees showing that GNSA converges to the global dataset value irrespective of dataset size when using mini-batching. This ensures NSA remains effective even for large-scale datasets.\", \"additional_experiments\": \"4. Empirical results: In Appendix I, we present layerwise experiments on ResNet models evaluated on 100K ImageNet images.\\nIn Appendix Q, we provide ablation studies for GNSA and LNSA on ImageNet, showing that with mini-batching (using multiple trials on subsets), NSA can accurately approximate global values even for large datasets.\\n\\n\\nWe believe these results comprehensively demonstrate NSA\\u2019s scalability, generalizability, and robustness across a wide range of datasets and tasks.\"}",
"{\"comment\": \"**8. Line 435. Clarification on how the embeddings are tested for link prediction, using ROC-AUC**\\n\\nThe latent embeddings are tested for link prediction by computing the dot product between the embeddings of node pairs, which was the method used during the original GCN training to minimize the link prediction loss. The computed values are then compared to the ground truth labels (consisting of an equal amount of positive and negative edge indices) to evaluate performance using metrics like ROC-AUC.\", \"the_dot_product_reflects_the_likelihood_of_an_edge\": \"closer nodes in the embedding space produce higher dot products, indicating a higher edge probability. We use RTD-AE and NSA-AE to reduce the dimensionality of the original representation space and test the reduced space in the same manner. Since the reduced space maintains a one-to-one mapping with the original, a good structure-preserving method should retain the relative distances between node representations. This is quantified by measuring the ROC-AUC score in the reduced space, reflecting how well the structural integrity is preserved. We direct the reviewer to **Global Response 2** for additional clarification on Section 4.3\\n\\n\\n**9. Dissimilarity vs similarity**\\n\\nThank you for pointing this out. We have added a footnote in the paper to denote that we use the term similarity index colloquially throughout the paper to refer to metrics that quantify relationships between representation spaces. We have also identified and changed the phrasing wherever we use the term similarity when talking about NSA computations in section 4.1. We will identify and change the phrasing throughout the manuscript in the final version.\\n\\n**10. Line 497. Why Robust GNNs and robust to what exactly?**\\n\\nThank you for the question. By \\\"robust,\\\" we refer to GNN variants designed to improve resilience against adversarial attacks or noisy input data. These models either have preprocessing steps or built in training mechanisms to detect and remove perturbations with varying degrees of effectiveness. We chose robust variants of GNNs for two reasons specifically.\\n1. Gradient in Performance: Robust GNN variants exhibit varying degrees of resilience to adversarial perturbations. This allows us to evaluate NSA\\u2019s sensitivity across a gradient of robustness, providing a nuanced view of how structural discrepancies manifest in models with different levels of inherent robustness.\\n2. Comparison with Related Work: By using a similar set of architectures to Mujkanovic et al. (2022) [7], we align our experimental setup with prior works in adversarial robustness. This enables us to correlate our findings with theirs. \\n\\n**11. What do the perturbation percentages mean?**\\n\\nThe perturbation percentages represent the proportion of edges added to the graph during an adversarial attack. For example, a 10% perturbation means that edges equal to 10% of the total number of edges in the original graph were added. We have clarified in the main text that the perturbations involve only edge additions and will ensure this is explicit in the final version of the manuscript.\\n\\n**12. No pointer to Figure 3**\\n\\nThank you for pointing this out. We have fixed it.\"}",
"{\"comment\": \"We thank the reviewer for their feedback. We address their concerns below:\\n\\n**1 Dependence on Euclidean distance in high dimensional spaces might not be effective in high dimensional spaces**\\n\\nWe appreciate the reviewer\\u2019s observation regarding the potential limitations of Euclidean distance in high-dimensional spaces due to the curse of dimensionality. While it\\u2019s true that Euclidean distance can face challenges in high-dimensional settings, we chose it deliberately for a few key reasons: \\n\\n1. **Prevalent Use Across Neural Network Tasks:** Euclidean distance remains a widely adopted metric in representation learning, particularly in tasks involving contrastive and triplet losses, where it has shown strong empirical performance. Many state-of-the-art neural network architectures still rely on Euclidean distance for computing similarity between embeddings, even in high-dimensional spaces.\\n2. **Empirical Robustness:** In our experiments, NSA demonstrated robust performance across multiple datasets and tasks, even when applied to high-dimensional embeddings (we test up to 200,000 dimensions). This suggests that, while Euclidean distance may not be theoretically optimal under the curse of dimensionality, it continues to yield practical benefits in diverse applications, including representation analysis and alignment tasks.\\n3. **Potential for Adaptability:** The NSA framework is flexible enough to accommodate alternative distance metrics if future work finds this necessary. In this study, we prioritized Euclidean distance for its simplicity, efficiency, and empirical success in maintaining local and global structural alignment across different model architectures.\\n\\nAdditionally we include a local component for NSA for this specific purpose. LocalNSA only looks at the k-nearest neighborhood of a point and can alleviate some of the discrepancies that can arise from GlobalNSA running at high dimensionalities.\\n\\n**2. No Access to source code**\\n\\nThank you for your feedback. We do provide access to our source code in Appendix T (Appendix R in the un-revised version), where detailed instructions for reproducing our results are included. We will update the codebase with the experiments performed for the revision and update the anonymous repository soon.\\n\\n**3. Lack of universality without parameter tuning. NSA's performance across different tasks relies heavily on parameter tuning and specific integration with other loss functions. The choice of k in the construction of k-nn graph is essential in the definition of LNSA. The weights in front of the local and global parts of NSA clearly lead to drastically different results depending on their values.**\\n\\nThank you for your feedback. We kindly direct the reviewer to **Global Response 1** where we discuss the parameter tuning requirements for NSA. We present extensive ablations for $l$,$g$ and $k$ in Appendix Q where we show that NSA is mostly effective on a large range of values. We also added visual ablations in Appendix Q on the swiss roll dataset to demonstrate how the value of $k$ could potentially affect convergence.\\n\\n**4. No thorough guidance is provided for the choices and tuning of these hyperparameters. For example how to do 'appropriately adjusting the number of nearest neighbors considered in each mini-batch' on line 366 remains unspecified.**\\n\\nThank you for your feedback. 
We have included an experiment in Appendix Q demonstrating empirical convergence of LNSA with the right value of $k$ and best practices for selecting $k$. As stated in Section 4.2.1 LNSA does not formally converge with mini-batching but has several favorable properties that help it perform well. For example, consider the scenario without mini-batching, where we examine a k-sized neighborhood of each point. When using mini-batches of size N/10, in expectation, we have k/10 points from the original neighborhood present in each mini-batch. Therefore, using LNSA with the number of nearest neighbors set to k/10 results in comparable outcomes to the non-mini batched case. By extension this applies to any fraction. We demonstrate this in Figure 12.b in Appendix Q.2\"}",
"{\"comment\": \"We sincerely thank the reviewer for taking the time to review our rebuttal in detail. We present responses to their queries below:\\n\\n**1. I do not understand what Figure 11 is showing, could you please provide context?**\\n\\nThank you for your feedback. Figure 11 illustrates the effect of varying the parameters $l$ and $g$ in the NSA formulation. The experiment involves reducing the Spheres dataset (comprising 10 spheres within an 11th sphere) from its original 100 dimensions to 2 dimensions. Figure 15 (c) plots the first 3 dimensions of the original dataset for reference.\\n\\nUsing an NSA-AE trained solely with the NSALoss, we systematically vary $l$ and $g$ to observe how these parameters influence the dimensionality reduction visually. The figure demonstrates how different combinations of $l$ and $g$ affect the structural preservation of the dataset during this reduction.\\n\\nWe have added additional explanatory text to the relevant subsection and caption to clarify this further. We hope this resolves any confusion regarding the purpose and interpretation of Figure 11.\\n\\n**2. The influence of the choice of k and clarifications on selecting k.**\\n\\nFigure 13 demonstrates how LocalNSA performs when varying both the $k$ value and the width of the Swiss Roll. As the width increases, we observe that a slightly larger $k$ value is required to preserve the manifold effectively. This occurs because increasing the width spreads the data points further apart, reducing the density of local neighborhoods. When $k$ is too small, the immediate neighbors of a point may fail to fully capture the local structure, leading to distortions in the reconstructed manifold. By slightly increasing $k$, we include more neighbors, compensating for this sparsity and ensuring that the manifold remains well-preserved. \\n\\nLocalNSA\\u2019s primary objective is to preserve local neighborhoods, and despite the changing width in Figure 13, it successfully achieves this for a wide range of $k$ values, failing slightly only when $k$ is extremely small and width is very high. The minimum requirement for preserving the manifold of the Swiss Roll is maintaining its 2D plane structure and local point neighborhood consistency. The variations in global structure observed in Figure 13 arise because LocalNSA operates without access to global information. Consequently, it converges as soon as it reconstructs the manifold perfectly. This reconstruction does not necessarily have to resemble the Swiss Roll\\u2019s original 3D geometry, as long as the manifold\\u2019s local properties are preserved. We demonstrate this using a gradient-based coloring scheme: Each point on the Swiss Roll is labeled with a color gradient that changes minimally along the manifold. 
In the reconstructed figure, this gradient remains consistent, confirming that while the global geometry may be distorted, the local manifold structure is effectively preserved.\\n\\nAlmost all the variability across the $k$ values in Figure 13 can be explained by the fact that LNSA does not preserve global structure and that the swiss roll does not align the same way in different reconstructions (since LNSA is invariant to scaling and transformations it does not preserve these when reconstructing the structure) with some minor variance explained due to the increasing width of the swiss roll.\\n\\n***On selecting k***\\n\\nThe guidance provided in line 1695 is not for determining an optimal absolute $k$, but rather for maintaining an optimal relative $k$ when working with mini-batches. Specifically:\\n- If the dataset size is $N$ and you wish to preserve local neighborhoods up to $k_{global}$ nearest neighbors in the full dataset, the value of $k$ in mini batches ($k_{mini}$) should be chosen to maintain the following proportion:\\n$$\\\\[\\n\\\\frac{N}{k_{\\\\text{global}}} = \\\\frac{\\\\text{mini-batch size}}{k_{\\\\text{mini}}}\\n\\\\]$$\\nThis ensures that the mini-batch LNSA value approximates the global LNSA value\\n\\n- There is no specific value of $k$ that works \\\"best\\\" for the swiss roll experiment in Figure 13 as we do not place any constraints on approximating the mini batch LocalNSA\\n- For example, if we were working with the Swiss Roll dataset from Figure 13 and had specific local neighborhood consistency constraints, for a full dataset size of 20480 and a mini batch size of 128, we would set the values using the following proportionalities:\\n - Set $k_{mini} = 2$ if we want $k_{global} = 320$ (dataset size: $k_{global}$ = 64)\\n - Set $k_{mini} = 5$ if we want $k_{global} = 800$ (dataset size: $k_{global}$ = 25.6)\\n - Set $k_{mini} = 10$ if we want $k_{global} = 1600$ (dataset size: $k_{global}$ = 12.8)\\n\\nThus if you had a specific degree of local neighborhood preservation you wanted to guarantee on your dataset, you could do so by appropriately adjusting the $k$ value to the mini-batch size. But for most datasets we observed that a wide range of $k$ values work very similarly in practice.\"}",
"{\"comment\": \"We thank the reviewer for their feedback and for appreciating our work. We address the reviewer's concerns below:\\n\\n**1. The reliance on Euclidean distance as a primary metric may limit performance in high dimensional spaces due to curse of dimensionality**\\n\\nWe appreciate the reviewer\\u2019s observation regarding the potential limitations of Euclidean distance in high-dimensional spaces due to the curse of dimensionality. While it\\u2019s true that Euclidean distance can face challenges in high-dimensional settings, we chose it deliberately for a few key reasons: \\n\\n1. **Prevalent Use Across Neural Network Tasks:** Euclidean distance remains a widely adopted metric in representation learning, particularly in tasks involving contrastive and triplet losses, where it has shown strong empirical performance. Many state-of-the-art neural network architectures still rely on Euclidean distance for computing similarity between embeddings, even in high-dimensional spaces.\\n2. **Empirical Robustness:** In our experiments, NSA demonstrated robust performance across multiple datasets and tasks, even when applied to high-dimensional embeddings (we test up to 200,000 dimensions). This suggests that, while Euclidean distance may not be theoretically optimal under the curse of dimensionality, it continues to yield practical benefits in diverse applications, including representation analysis and alignment tasks.\\n3. **Potential for Adaptability:** The NSA framework is flexible enough to accommodate alternative distance metrics if future work finds this necessary. In this study, we prioritized Euclidean distance for its simplicity, efficiency, and empirical success in maintaining local and global structural alignment across different model architectures.\\n\\nAdditionally we include a local component for NSA for this specific purpose. LocalNSA only looks at the k-nearest neighborhood of a point and can alleviate some of the discrepancies that can arise from GlobalNSA running at high dimensionalities.\\n\\n\\n**2. NSA is versatile but may require careful tuning and modifications to work effectively in specific scenarios**\\n\\nCertain applications may benefit from tuning NSA parameters to optimize performance. However, our findings suggest that NSA requires minimal parameter adjustments compared to traditional manifold learning techniques such as ISOMAP, where parameter selection can heavily impact results. NSA\\u2019s tuning process is straightforward and produces only incremental gains, indicating that the method performs robustly without extensive parameter optimization.\\n\\nTo provide further transparency, we included ablation studies in Appendix Q to identify the optimal parameter configurations for NSA and present a more detailed response to the need for parameter tuning in **Global Response 1**.\\n\\n**3. The limitations of NSA are not explored beyond high-dimensionality issue**\\n\\nThank you for raising this point. We acknowledge that while NSA has proven effective across a variety of tasks, it has a few limitations worth noting:\\n\\n1. **Minor Parameter Tuning:** NSA requires minimal parameter tuning, though, as shown in our ablation studies (Appendix Q), these adjustments result in only small gains compared to manifold learning methods like ISOMAP, which are more sensitive to parameter selection.\\n\\n2. **High-Dimensional Euclidean Distance and Geometric vs. 
Topological Similarity:** As noted, NSA leverages Euclidean distance, which can be affected by high-dimensionality. However, the versatility and widespread success of Euclidean distance in neural network applications make it a pragmatic choice for NSA. To address this, we present studies in Appendix R where we evaluate the reconstruction of a Swiss Roll using NSA-AE and demonstrate how geodesic distance can be used as replacement for euclidean distance in high dimensional spaces or in spaces where the manifold lies in a lower dimension than the original data. NSA-AE with geodesic distance as the distance measure can unravel complex manifolds.\\n\\n3. **Inability to measure functional similarity:** NSA is primarily a structural similarity index and is therefore incapable of measuring functional similarity accurately. We illustrate this with cross architecture experiments in Appendix V. While NSA shows promise, it is incapable of perfectly capturing layerwise similarity even though both GNN architectures are trained on the same task and on the same dataset making them functionally similar.\\n\\nThese considerations are relatively minor and do not affect NSA's definition and superiority in its domain, and our experimental results demonstrate NSA\\u2019s robustness across datasets and tasks.\"}",
"{\"summary\": \"The paper introduces a method (NSA) for comparing two data representations of the same dataset. NSA is a weighted sum with some tuned weights of GNSA which essentially compares the pairwise euclidian distances in the two representations of the same points, and of LNSA which is a local dissimilarity measure, based on k-NN graph. Experiments are described in order to empirically validate the expected properties of the method, although no access to the source code is provided.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"-The paper addresses an interesting problem of constructing a reasonable measure of dissimilarity of two data representations of the same dataset.\\n\\n-Different experiments are described in order to empirically validate the method, although no source code is provided making reproducibility check difficult.\", \"weaknesses\": \"1.Dependence on Euclidean distance in high-dimensional spaces. NSA uses essentially the comparison of Euclidean distances as its measure of structural similarity. This choice can be suboptimal in high-dimensional spaces due to the \\\"curse of dimensionality,\\\" which makes Euclidean distances less informative and can lead to unreliable similarity measurements.\\n\\n2.An access to the source code is not provided making the paper results reproducibility check difficult.\\n\\n3.Lack of universality without parameter tuning. NSA's performance across different tasks relies heavily on parameter tuning and specific integration with other loss functions. The choice of k in the construction of k-nn graph is essential in the definition of LNSA. The weights in front of the local and global parts of NSA clearly lead to drastically different results depending on their values. \\n\\n4.No thorough guidance is provided for the choices and tuning of these hyperparameters. For example how to do 'appropriately adjusting the number of nearest neighbors considered in each mini-batch' on line 366 remains unspecified.\\n\\n5.High computational complexity for large datasets. Despite claims of efficiency, NSA has a quadratic computational complexity concerning the number of data points, \\\\( O(N^2 D + kND) \\\\). This can become prohibitively expensive as the dataset size grows.\\n\\n6.The method's focus on structural preservation might make it less effective in scenarios where functional similarity is more relevant, limiting its applicability.\\n\\n7.Absence of interpretability mechanisms for practical applications. Although NSA provides a structural similarity measure, it lacks interpretability features that could make its outputs more useful in real-world applications. For instance, it does not offer insights into which specific features or dimensions contribute most to the observed structural discrepancies.\", \"questions\": \"Why an access to the source code was not provided for reproducibility check purposes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"While I appreciate the authors' efforts, in my view the changes to the main text of the manuscript were minimal, despite important concerns raised by me and other reviewers. The material added to the appendices does not convincingly demonstrate how NSA represents a significant advance as a structure representation metric other than in the particular examples chosen. Thus I will maintain my original score.\"}",
"{\"comment\": \"**5. High computational complexity for large datasets. Despite claims of efficiency, NSA has a quadratic computational complexity concerning the number of data points, ( O(N^2 D + kND) ). This can become prohibitively expensive as the dataset size grows.**\\n\\nWhile NSA has quadratic computational complexity with respect to the number of data points, it remains feasible and efficient for large datasets due to its compatibility with mini-batch training. As discussed in Section 4.2.1, NSA\\u2019s global measure can be effectively approximated using mini-batches, similar to stochastic gradient descent (SGD). Lemma G.1 further proves that the expectation of GNSA over mini-batches equals the GNSA of the entire dataset, ensuring that structural discrepancies are accurately captured even in mini-batch settings. With additional experiments in the revised manuscript we demonstrate that this applies for LNSA too. We would like to emphasize that this property of converging under mini batching is unique to NSA and unlike other measures. We show in 4.2.1 that RTD can never converge to its global value as it will always miss the ideal global topological feature when working with mini batches. Additionally, most similarity metrics are not even differentiable and therefore cannot be used as a loss function at all.\\n\\nThis property keeps NSA\\u2019s complexity manageable regardless of dataset size, as long as the mini-batch sizes are not exceedingly large. Practitioners can work with arbitrarily large datasets by iterating over smaller, feasible chunks. As shown in the runtime analysis in Appendix H, NSA computation remains efficient: even for large batch sizes (up to 5000) and very high-dimensional data (~200,000), NSA takes only a few seconds per batch.\\n\\n**6. The method's focus on structural preservation might make it less effective in scenarios where functional similarity is more relevant, limiting its applicability.**\\n\\nThank you for pointing this out. The reviewer is correct in that NSA's focus on structural preservation makes it less effective in functional similarity scenarios. NSA is designed as a structural similarity metric, with the goal of preserving and analyzing the geometric and topological properties of representation spaces. Focusing on structural preservation inherently limits the ability to directly measure functional similarity, as the two concepts often emphasize different aspects of representation. This tradeoff is intrinsic to similarity metrics: prioritizing structural fidelity can make functional relationships harder to capture, and vice versa. Both approaches have distinct strengths and applications.\\n\\nWe also present experiments on layerwise analysis across architectures in Appendix V, where two GNN architectures are trained on the same dataset and task, making them functionally similar but NSA does not identify layerwise correspondence perfectly. Despite this, we demonstrate in our work that structural similarity is highly relevant and viable for a wide range of applications, including dimensionality reduction, adversarial robustness analysis, neural network similarity analysis and representation alignment. In scenarios where functional similarity is more critical, other metrics designed specifically for that purpose may be more appropriate. \\n\\n**7. Absence of interpretability mechanisms for practical applications.**\\n\\nThank you for your feedback. We would like to clarify that NSA does include interpretability mechanisms. 
Both components of NSA, LNSA and GNSA, are aggregates of nodewise discrepancies. Specifically, GNSA is an aggregate of nodewise global discrepancies, while LNSA aggregates local nodewise discrepancies. By examining the nodewise formulations of NSA (e.g., Equation 8 presents the nodewise equation for GNSA, and LNSA can be similarly expanded), it is possible to pinpoint exactly which nodes contribute most to the overall structural discrepancy.\\n\\nWe demonstrate this interpretability feature in Section 4.4, where we perform a node wise analysis of perturbations. This analysis highlights why SVD-GCN exhibits poor structural similarity and heightened vulnerability, despite showing only a small drop in accuracy when perturbed with adversarial edges. We also demonstrate this for CNNs using ResNets in Appendix I. NSA can identify which images have been perturbed and quantify their contribution to the overall discrepancy. Such pointwise insights make NSA particularly useful for interpreting structural changes in practical applications.\\n\\n\\nWe hope our response clarifies the queries of the reviewer. Please feel free to ask any questions you might have about our work.\"}",
"{\"comment\": \"**4. How does NSA perform in extremely high-dimensional spaces where Euclidean distance is known to be problematic? Are there alternative distance metrics that could be integrated into NSA?**\\n\\nWe have indeed explored NSA\\u2019s performance in high-dimensional spaces through our ResNet-18 experiments (Appendix I), where the hidden layer embeddings, once flattened, reach very high dimensionalities (up to approximately 200,000). Despite these high dimensionalities, NSA has consistently demonstrated strong alignment and similarity retention between representation spaces, indicating robust performance even in challenging settings.\\n\\n**Alternative Distance Metrics:** We acknowledge that alternative distance metrics may offer benefits for certain tasks or extremely high-dimensional spaces. Potential alternatives include:\\n- Cosine Similarity: Often used in high-dimensional embeddings, cosine similarity could serve as an alternative for comparing directional similarity, though it lacks the same geometric interpretation as Euclidean distance.\\n- Mahalanobis Distance: This metric could adapt to the covariance structure of the data, capturing differences in distributions more effectively than Euclidean distance, particularly in cases with correlated features.\\n- Geodesic Distance: For complex manifolds, geodesic distances (as applied in ISOMAP) could better capture topological structure (as explored in Appendix R).\\n\\nWhile Euclidean distance remains the default in NSA for its simplicity and efficiency, NSA\\u2019s modular design allows for flexibility in adopting alternative metrics, depending on the data\\u2019s structure and specific application requirements.\\n\\n**5. How sensitive is NSA to parameter settings, and what are the best practices for tuning it in different applications (e.g., adversarial robustness vs. dimensionality reduction)?**\\n\\nNSA is robust to a wide range of parameter settings, as demonstrated in our ablation studies in Appendix Q. The key parameters include the weights for local (LNSA) and global (GNSA) components and the k-value in the k-NN graph. For most applications:\\n\\n- **Adversarial Robustness:** Prioritize GNSA (higher weight on global structure) to capture overall structural discrepancies caused by perturbations.\\n\\n- **Dimensionality Reduction:** Balance LNSA and GNSA to preserve both local and global structures. The choice of k is less critical, with a moderate range working well across datasets.\\nOur studies show that NSA achieves competitive performance with default parameter settings, and fine-tuning provides incremental improvements tailored to specific tasks. We recommend starting with default values and adjusting based on the task's emphasis (local vs. global structure).\\n\\n**6. 
Given the versatility of NSA, do you envision any specific areas where its application would be limited or challenging?**\\n\\nWhile NSA is versatile, its application may be limited in scenarios where:\\n\\n- Functional Similarity: Tasks prioritizing functional rather than structural similarity (e.g., task-specific feature alignment) may require alternative metrics.\\n- Topology-Specific Analysis: Applications requiring explicit topological analysis (e.g., persistent homology) may benefit more from specialized topology-centric methods.\\n\\nThese limitations are inherent trade-offs of NSA\\u2019s design, focusing on structural similarity, and can often be addressed through preprocessing or complementary metrics.\\n\\n\\nWe hope our responses clarified any queries that the reviewer had. Please let us know if you have any more questions.\"}",
"{\"title\": \"Thank you for the excessive rebuttal\", \"comment\": \"First of all, I appreciate the effort the authors put into this extensive rebuttal. I have a few follow-up comments and questions:\\n\\n3. \\n- I do not understand what Figure 11 is showing, could you please provide context?\\n- To me, in Figure 13, the influence of the choice of k is big. I can image this is even more problematic in higher dimensions. I also do not understand you guidance on selecting k (in line 1695). According to this formula, the optimal choice for k in Figure 13 should be 128/100 which is not the case.\\n\\n5. In my opinion, all figures have a too small font size (Fig 1 numbers aren't readable, Fig 2 the y axis numbers, Fig 3 the legend isn't readable, same for Fig 4 and others)\"}",
"{\"title\": \"Result of experiments on Generative Model Evaluation Metrics\", \"comment\": [\"While experimenting with DCA to evaluate specificity and sensitivity tests, we encountered several limitations in its current definition.\", \"DCA and similar generative model evaluation metrics are unable to compute $\\\\text{metric}(A, B)$ when $\\\\text{dim}(A) \\\\neq \\\\text{dim}(B)$. This restriction prevents their use in generating specificity heatmaps, as they can only compute values across corresponding layers (i.e., along the diagonal). The inability to evaluate intermediate embeddings for mismatched dimensionalities severely limits our ability to evaluate their performance against structural similarity metrics.\", \"Furthermore, during our experiments, we observed that the precision and recall values from DCA dropped drastically as the dimensionality of the embeddings increased. For example, while the final layer embeddings (10 dimensions) yielded precision and values around 0.98, the first layer embeddings (4096 dimensions) return a recall and precision of 0. This drastic disparity makes it nearly impossible to generate meaningful sensitivity plots across layers.\", \"Additionally, given the same experimental setup, we found DCA to be significantly inefficient even when using GPUs. Computing the metrics once on a subset of 3000 points in 4096 dimensions takes ~7 minutes. This is magnitudes higher than the compute cost of NSA, which takes only a few seconds for larger subsets and much larger dimensionalities (we present empirical runtimes on 200,000 dimensional data in Appendix H).\", \"The reliance on multiple separate metrics (precision, recall, authenticity etc) rather than a single global similarity measure further complicates the process of comparison and their utility in structural similarity tasks.\", \"These limitations make us believe DCA and other generative model evaluation metrics are best suited to compare output embeddings only and are ill-suited for comparison with structural similarity metrics like NSA, CKA and RTD, which are designed to evaluate and quantify discrepancies across spaces irrespective of dimensionality differences or the specific layer being analyzed.\", \"We hope this satisfies the reviewer's concerns regarding comparison of NSA to these metrics. Please let us know if you have any additional questions.\"]}"
]
} |
1RNSYEEpwi | Stealing User Prompts from Mixture-of-Experts Models | [
"Itay Yona",
"Jamie Hayes",
"Ilia Shumailov",
"Nicholas Carlini"
] | Mixture of Expert (MoE) models improve the efficiency and scalability of dense language models by \emph{routing} each token to a small number of experts in each layer of the model. In this paper, we show how an adversary that can arrange for their queries to appear in the same batch of examples as a victim's queries can exploit expert-choice routing to the full disclosure of a victim's prompt. We successfully demonstrate the effectiveness of this attack on a two-layered Mixtral model. Our results show that we can extract the entire prompt using $\mathcal{O}(\text{Vocabulary size} \times \text{prompt length}^2)$ queries or a maximum of 100 queries per token in the setting we consider. Our work is the first of its kind data reconstruction attack that originates from a flaw in the model architecture, as opposed to the model parameterization. | [
"Mixture-of-Experts",
"privacy",
"ml-security",
"information security",
"buffer overflow",
"leakage",
"exploit",
"token dropping"
] | Reject | https://openreview.net/pdf?id=1RNSYEEpwi | https://openreview.net/forum?id=1RNSYEEpwi | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"w0w2RxhVYa",
"oLjkJVXLAG",
"bR9XunllkW",
"R6QicZ56kW",
"IaGsspJQ0M",
"DRoaMYU7W1",
"A61x05Y52P",
"9197R7Hve2",
"0S1hV7ifva"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1734967315911,
1732992931316,
1730487052937,
1732573383631,
1732737456819,
1737523845663,
1732574708851,
1730704430571,
1730147526745
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7546/Area_Chair_KiWh"
],
[
"ICLR.cc/2025/Conference/Submission7546/Reviewer_B26R"
],
[
"ICLR.cc/2025/Conference/Submission7546/Reviewer_BQLL"
],
[
"ICLR.cc/2025/Conference/Submission7546/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7546/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7546/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7546/Reviewer_B26R"
],
[
"ICLR.cc/2025/Conference/Submission7546/Reviewer_PLBR"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a new prompt stealing attack for LLMs with MoE architecture. The attack exploits the expert-choice routing mechanism to disclose a victim user's prompt when the attacker's input is batched together with the victim's. This paper is the first to show such a side-channel attack is possible.\\n\\nAll reviewers agree the attack presented in this paper is novel and interesting, and can serve as a cornerstone for future attacks that exploit similar side-channel vulnerabilities. However, reviewers also pointed to several major weaknesses, including limited generality of the attack, excessive query cost, and evaluation lacking depth. AC agrees with the paper's merits and shortcomings and believes the paper has limited impact in its current form. If these weaknesses are addressed, especially if the attack can be expanded to handle more general MoE architectures with reduced query cost, it can be a much more influential paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers and authors discussed the shortcomings, and the authors admitted the above limitations. The paper's decision ultimately depended on whether reviewers believed the techniques presented in the paper have future impact even if they may be unrealistic in their current form.\"}",
"{\"comment\": \"Thank you for your further explanation. I agree that identifying potential risks in LLM systems is valuable work. However, the impact of this approach seems limited, as it is tailored to a less commonly used method\\u2014token drop\\u2014and is not very practical due to its computational expense. The application of strong attacker settings further contributes to these limitations.\\n\\nI decide to keep my score at 5.\"}",
"{\"summary\": \"The paper shows that if someone else's data is placed in the same batch as your data for many consecutive queries, and the model is a 2-layer MoE whose weights you have access to, and you can locally compute a forward pass on the MoE and the KV Cache, and that MoE is using cross-batch Expert-Choice Routing, and the router weights are heavily quantized in order to induce ties, and the MoE is running PyTorch TopK, then you can brute-force (with exponential query complexity) some of the tokens of the other person's query.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Attacking deployments of MoEs is a pretty interesting idea, and stealing the data of other users who are using the inference API is sufficiently high impact that this paper may have some impact even if the threat model and attack are unrealistic / impractical.\\n\\nThe diagrams explained the attack quite well.\", \"weaknesses\": \"The authors acknowledge upfront that their threat model is unrealistic (line 135).\\nI will add some additional reasons why the threat model is unrealistic;\\n\\n- Not all deployed MoEs use Expert Choice Routing. In Expert Choice Routing, typically some tokens may be dropped if they don't go to any expert because that expert is filled. Expert Choice Routing can be very bad in some settings. The alternative is Dropless MoEs, which can be implemented in a couple different ways. I'm not sure which MoEs that are deployed actually use Expert Choice Routing, but if I were to go to an inference provider and ask for Deepseek MoE or DBRX, they would be serving a Dropless MoE. So some kind of table showing \\\"here are the deployed MoEs that use Expert Choice Routing\\\" would be useful. Of course this is closed information in many places, so I don't expect the authors to try and figure out whether Gemini or GPT-4 is using this, but you can at least go to all the inference providers serving open-weights MoEs (because you need open weights MoEs for this attack to work anyways) and see which ones use expert-choice routing. As far as I can tell, it is none of them, but I would want to see this table.\\n- Not all deployed MoEs would use the tie-handling mechanism that the attack relies on exploiting. The only way for a tie to occur is if two tokens have the exact same output from the router. But this does not happen even if those two tokens are actually the same, because over the course of an MoE with multiple layers, the token representations get mixed with other tokens via Attention. The authors note that they quantise the router weights to 5 bits to induce ties (line 377) but even if the router weights were quantised, you would not get ties in a multilayer model. I routed some tokens from Fineweb-CC-2014-03-04 through Mixtral 8x7B, saved the router scores, and found that there are basically no ties. If the authors could release their code that would be helpful to reproduce this tie-breaking behavior, even if it does require quantization.\\n- Some deployed MoEs would use jitter, which also totally messes up the proposed algorithm. Jitter just tries to sample from a slightly perturbed distribution so now we are even less likely to see ties.\\n- Not all deployed MoEs do not use the first-come-first-serve tie-breaking CUDA topk function that the authors assume they are using. For example, xAI's Grok and Gemini do not use this function. This is because the PyTorch TopK function on CUDA is absurdly memory inefficient. TRT, vLLM, etc. 
use other CUDA kernels for Topk that do not have this issue. Ex, NVIDIA's FasterTransformer uses this https://github.com/NVIDIA/FasterTransformer/blob/main/src/fastertransformer/kernels/sampling_topk_kernels.cu. \\n- Deployed MoEs typically do not have open weights. Even if we consider an inference provider running Pytorch on CUDA to serve an open-source MoE like Deepseekv2 such as Fireworks, the inference provider's KV Cache compression mechanism (anyone serving a model is not storing the full KV Cache, they are doing something like MLA, or sparse KV Cache, or quantized, or pruned, etc etc etc) is not publicly known. And this is required for the adversary to run this attack, because the adversary needs the KV Cache locally in the same way that the model is being inferenced on the cloud.\\n- If the adversary can run an open-weights MoE like Deepseek-v2 locally for many thousands of queries, they are operating with a massive amount of computational power. Furthermore, this attack needs the victim's data to also be present in the same batch for many queries.\\n\\nThe authors do not spend enough time proposing defenses; the paragraph starting on (line 484) should be expanded into a subsection. The authors had some ~30 lines remaining so it's not a matter of space constraints.\\n\\nThe main text of the paper is pretty much incomplete. There are too many places where the reader is forced to scroll to the Appendix and read a chunk of text in order to follow the paper. This is unfortunately becoming a common practice, but I dislike it nonetheless.\\n\\nThe confidence intervals seem way too large in Figure 4. It looks like all these attacks could just have 0 success rate. And this is even in the super unrealistic setting where the canaries are taking on a few values, the vocab is <10k (Gemma has vocab 256k), the model is artificially altered to make the attack work at all.\\n\\nThe attack is pretty unsophisticated. If I had to draw a comparison, I would say that this is like the brute-force binary search attacked proposed to extract logprobs by exploiting logit bias as proposed by Morris 2023. It's straightforward and if you don't care about efficiency it's fine, but it's not going to make an attack paper on its own. What can the community learn from the development from this attack? It has no practical implications, so there should be something about the design that is clever or inspires new ideas.\\n\\nThere are some minor typos (line 496) (line 837) (line 342) (line 819) (line 820)\", \"questions\": \"n/a\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you very much for the feedback on the paper.\\n\\n> As acknowledged in the submission, the setting is unrealistic. The adversary needs to (1) control the placement of target inputs in the batch, (2) repeatedly submit different orderings of the same batch to the model, and (3) observe its internal routing choices. Man-in-the-middle (mention in 3.1) might be able to do (1) -- although not entirely clear how -- but not (2) or (3). I cannot think of any setting where (2) and (3) are available to the adversary, yet the adversary is unable to directly observe inputs into the model.\\n\\nWe view our paper is an important proof-of-concept that model architectural decisions can have privacy impact on user submitted data. We argue in Section 6 that not only do we expect the attack to get better, but we also suspect that other flavours of MoE routing strategies can be exploitable. \\n\\n> Evaluation is rudimentary, just a single Mixtral model. I understand this is a proof-of-concept, but seems a little skimpy for a conference submission.\\n\\nSince our current attack algorithm is general (has precise complexity bound) and exploits specifics of the routing algorithm, we are not sure if further models would change the paper narrative at all. Importantly, complexity grows with the number of layers due to the logit matching phase of the attack and less with the choice of specific experts. We evaluated the attack with randomly initialised experts and saw no major change in hardness of blocking the experts. \\n\\n> Just a single routing strategy is investigated. I do believe that other routing strategies may be similarly vulnerable, but again, seems skimpy for a conference submission.\\n\\nWe disagree with the reviewer on this point. Getting the a general attack algorithm to work and identifying the exact threat model to make it feasible was not at all a trivial effort that took many months of hacking and exploration. We also want to note that to make the attack work not only did we have to model the routing algorithm itself, but we also needed to debug deep specifics of how tie handling operates on accelerators in practice. We would expect that exploiting other strategies and optimising the one presented in our manuscript would take months if not years of effort; while at the same time the simple fact uncovered by our attack will remain -- MoE with ECR is vulnerable and leaks user data bit by bit. \\n\\n> Defences are not really explored in any depth. Randomizing top-k and/or token dropping (or other aspects) should mitigate the attack, but would it have a noticeable impact on performance / quality of the results?\\n\\nWe want to note that adding random noise of magnitudes considered here will not disrupt the model performance almost at all, since at present the attack requires extreme precision to exploit the tie handling.\"}",
"{\"comment\": \"Thank you very much for the feedback on the paper.\\n\\n> Could you please further discuss about how man-in-the-middle attacks can help to inject the proposed attack in LLM server?\\n\\nThank you for raising this point. We agree that discussing the feasibility of the attack in real-world scenarios is important. In MITM scenarios an attacker can have more control of the user interaction with a server [1], even though the communication is encrypted and thus not visible to the attacker. In such settings one can carry our attack more realistically, as the requirement of controlling the positions in batch or forcing the user to send the same secret message repeatedly. We will add a section to illustrate that.\\n\\n> Could you discuss what will happen if there are two tokens sharing the same routing path.\\n\\nThank you for this insightful question. Our exploit is designed to handle cases where two tokens share the same routing path. We ensure a unique signal is generated only when our guesses for both the token's identity and its position within the target expert buffer are correct. This allows us to differentiate between tokens even if they follow identical routing paths. We will clarify this mechanism further in the revised version.\\n\\n> The threat model assumes an attacker with significant control over the LLM server, which may not be practical in real-world settings. Additionally, token-dropping techniques are not widely used in recent LLM inference architectures, limiting the relevance of the attack to current models.\\n\\nWe appreciate your feedback on the threat model. You're right that the current attack assumes a strong attacker, and token-dropping may not be prevalent in current LLMs. However, we believe it's crucial to proactively identify and address potential vulnerabilities, even if they are not immediately exploitable. Our work aims to raise awareness about the risks associated with certain design choices in LLMs and encourage the development of secure and robust architectures. This is particularly relevant for future LLMs and evolving inference techniques, where token-dropping or similar mechanisms might be employed. By highlighting these vulnerabilities, we hope to inform the design and implementation of secure LLMs, even if the specific attack demonstrated here has limitations in current real-world settings.\\n\\n> The attack is computationally intensive, requiring up to 1,000 tokens for each token being extracted, which may restrict its feasibility in large-scale applications.\\n\\nWe acknowledge the computational limitations of the current attack. As you pointed out, the attack's complexity could hinder its feasibility in large-scale applications. However, we believe the core findings of our research remain valuable. Our work highlights a previously unknown vulnerability that could have significant implications for the security and privacy of LLMs. This information is crucial for LLM architects and developers, particularly those working with Trusted Execution Environments, as it underscores the need for robust security measures to protect user data, even within batched processing environments. We hope our findings will stimulate further research into more efficient attack strategies and mitigation techniques.\\n\\n> The explanation of the proposed method for Recovering Target Token Routing Path lacks clarity. It is unclear how the method handles cases where two tokens share the same routing path. 
If two tokens follow identical paths, this could complicate the attack, as distinguishing between them based on routing alone may not be difficult.\\n\\nThank you for your feedback on the clarity of our method. We will revise the explanation of the 'Recovering Target Token Routing Path' method in the paper to provide a more comprehensive and clear description. As mentioned earlier, our attack can successfully distinguish between tokens sharing the same routing path by relying on the unique signal generated only when both the token's identity and position are correctly guessed. We will ensure this aspect is clearly conveyed in the revised manuscript.\\n\\n[1] https://docs.google.com/presentation/d/11eBmGiHbYcHR9gL5nDyZChu_-lCa2GizeuOfaLU2HOU/edit#slide=id.g1d134dff_1_222\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you very much for the feedback on the paper.\\n\\n> The authors acknowledge upfront that their threat model is unrealistic (line 135). I will add some additional reasons why the threat model is unrealistic; ...\\n\\nThank you very much for all of the above points, they are extremely useful for setting the scene on deployment practices. We agree fully that at present the attack is unrealistic and we emphasize this in the paper. We will add a section to the appendix to expand on the above points and also add a note to the existing Section 6 with additional requirements for the future attacks to become realistic. We once again thank the reviewer for this.\\n\\n> The authors do not spend enough time proposing defenses; the paragraph starting on (line 484) should be expanded into a subsection. The authors had some ~30 lines remaining so it's not a matter of space constraints.\\n\\nMany thanks for this. We will add more to the defenses section. Currently to defend one really needs to add minor amount of noise, since the current attack requires extreme precision to work and errors compound making extraction of later letters harder. \\n\\n> The main text of the paper is pretty much incomplete. There are too many places where the reader is forced to scroll to the Appendix and read a chunk of text in order to follow the paper. This is unfortunately becoming a common practice, but I dislike it nonetheless.\\n\\nWe agree with the reviewer, but we could not find another way to overcome it, since there is quite a bit of complexity to describe how and why the attack works. We will iterate over the manuscript to improve its readability. \\n\\n> The confidence intervals seem way too large in Figure 4. It looks like all these attacks could just have 0 success rate. And this is even in the super unrealistic setting where the canaries are taking on a few values, the vocab is <10k (Gemma has vocab 256k), the model is artificially altered to make the attack work at all.\\n\\nIndeed. That was due to the quirks of the hyperparameters chosen for this particular evaluation run. After a minor change to the evaluation parameters we reduced the variance in performance and now the attack works in almost 100% of cases. \\n\\n> The attack is pretty unsophisticated. \\n\\nAlthough we agree with the reviewer that the final attack algorithm is unsophisticated, we want to stress that making it work was not at all easy. Getting the general attack algorithm to work and identifying the exact threat model to make it feasible was not at all a trivial effort that took many months of hacking and exploration. We also want to note that to make the attack work not only did we have to model the routing algorithm itself, but we also needed to debug deep specifics of how tie handling operates on accelerators in practice. \\n\\n> What can the community learn from the development from this attack? It has no practical implications, so there should be something about the design that is clever or inspires new ideas.\\n\\nWe view our paper is an important proof-of-concept that model architectural decisions can have privacy impact on user submitted data. We argue in Section 6 that not only do we expect the attack to get better, but we also suspect that other flavours of MoE routing strategies can be exploitable. Our work is an example to a simple fact -- *MoE with ECR is vulnerable and leaks user data bit by bit*. 
\\n\\nNote that you could make the same comments about Spectre or Meltdown attacks (ie simple, unsophisticated, leaks a single bit), yet it had a major impact in how speculation is performed on our everyday computers; and it was in fact used to perform real-world impacting attacks. Our work shows a novel kind of vulnerability; stronger attacks will follow. \\n\\n> There are some minor typos (line 496) (line 837) (line 342) (line 819) (line 820)\\nThank you very much for this. All fixed now.\"}",
"{\"summary\": \"This paper explores a novel security vulnerability in Mixture-of-Experts (MoE) language models, specifically focusing on the risk of prompt leakage through the architecture's routing mechanisms.The proposed attack, an adversary manipulates expert buffers within an MoE model to extract a victim's prompt by observing how token routing and dropping affect model outputs. The study reveals that an attacker can reconstruct a user\\u2019s prompt by exploiting token-dropping patterns and guessing tokens sequentially.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The study introduces a novel security concern by identifying a previously unexamined vulnerability in LLM service.\", \"Experimental results demonstrate the effectiveness of the proposed attack, showing that it reliably extracts user prompts under the specified conditions.\"], \"weaknesses\": [\"The threat model assumes an attacker with significant control over the LLM server, which may not be practical in real-world settings. Additionally, token-dropping techniques are not widely used in recent LLM inference architectures, limiting the relevance of the attack to current models.\", \"The attack is computationally intensive, requiring up to 1,000 tokens for each token being extracted, which may restrict its feasibility in large-scale applications.\", \"The explanation of the proposed method for Recovering Target Token Routing Path lacks clarity. It is unclear how the method handles cases where two tokens share the same routing path. If two tokens follow identical paths, this could complicate the attack, as distinguishing between them based on routing alone may not be difficult.\"], \"questions\": [\"Could you please further discuss about how man-in-the-middle attacks can help to inject the proposed attack in LLM server?\", \"Could you discuss what will happen if there are two tokens sharing the same routing path.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"In MoE models, individual experts process tokens in priority order; tokens with the same priority are processed in the arrival order (because of a CUDA quirk). If the buffer is almost full, the second-to-arrive token is dropped. This is a side channel: if an adversary can control the relative placement of their own and someone else's tokens in a batch, they can first fill the buffer with high-priority tokens, then switch the order between their own token and someone else's unknown token, and observe the resulting routings. If the routing is the same for both tokens, this means the adversary's token is the same as the unknown token, revealing the value of the latter. With repeated application, this can be leveraged into an extraction attack.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Data-dependent computations are vulnerable to side-channel leakage: designers of ML systems need to learn this lesson.\", \"Cool exploitation of an interesting side channel in a particular MoE architecture (+ the top-k implementation in CUDA).\", \"History of computer security suggests that even seemingly impractical side channels can turn into exploitable vulnerabilities (with lots of additional research, of course).\"], \"weaknesses\": [\"As acknowledged in the submission, the setting is unrealistic. The adversary needs to (1) control the placement of target inputs in the batch, (2) repeatedly submit different orderings of the same batch to the model, and (3) observe its internal routing choices. Man-in-the-middle (mention in 3.1) might be able to do (1) -- although not entirely clear how -- but not (2) or (3). I cannot think of any setting where (2) and (3) are available to the adversary, yet the adversary is unable to directly observe inputs into the model.\", \"Evaluation is rudimentary, just a single Mixtral model. I understand this is a proof-of-concept, but seems a little skimpy for a conference submission.\", \"Just a single routing strategy is investigated. I do believe that other routing strategies may be similarly vulnerable, but again, seems skimpy for a conference submission.\", \"Defences are not really explored in any depth. Randomizing top-k and/or token dropping (or other aspects) should mitigate the attack, but would it have a noticeable impact on performance / quality of the results?\"], \"questions\": \"The paper seems premature in its current form, but I would advocate for it if a meaningful subset of the weaknesses were addressed. It would require a much more substantial evaluation, though.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1RC3KtP1jT | Archilles' Heel in Semi-open LLMs: Hiding Bottom against Recovery Attacks | [
"Hanbo Huang",
"Yihan Li",
"Bowen Jiang",
"Ruoyu Sun",
"Lin Liu",
"Zhuotao Liu",
"Bo Jiang",
"Shiyu Liang"
] | To address privacy concerns with large language models, industrial users request local fine-tuning and deployment, but unencrypted models risk theft. Hardware-based security provides protection but is constrained by secure memory, leading to semi-open configurations. Semi-open models balance security and customization by keeping key layers closed-source within a secure environment while allowing others to be fine-tuned, but closed-source layers are susceptible to recovery attacks. In this paper, we explore the design of semi-open models with fewer closed-source layers, aiming to increase customizability while ensuring resilience to recovery attacks. We analyze the contribution of closed-source layer to the overall resilience and theoretically prove that in a deep transformer-based model, there exists a transition layer such that even small recovery errors in layers before this layer can lead to recovery failure. Building on this, we propose \textbf{SCARA}, a novel approach that keeps only a few bottom layers as closed-source. SCARA employs a fine-tuning-free metric to estimate the maximum number of layers that can be publicly accessible for customization. We apply it to five models (1.3B to 70B parameters) to construct semi-open models, validating their customizability on six downstream tasks and assessing their resilience against various recovery attacks on sixteen benchmarks. We compare SCARA to baselines and observe that it generally improves downstream customization performance and offers similar resilience with over \textbf{10} times fewer closed-source parameters. We empirically investigate the transition phenomenon and analyze the effectiveness and limitations of our scheme. | [
"Semi-open Model",
"Closed-sourcing Approach"
] | https://openreview.net/pdf?id=1RC3KtP1jT | https://openreview.net/forum?id=1RC3KtP1jT | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zQeaDh2EFd",
"zFsQIrTWem",
"z1aAxGKsAV",
"wlNQzkV3Ot",
"vH58jUzcuM",
"uO50Q07VL3",
"tRPodDapZK",
"swYmUgSIY5",
"rdkQxjzOaS",
"rPUs93cCke",
"qvc5QcmuDL",
"pqzDhACiju",
"mGcRGEuQNh",
"lfIPpE9rGu",
"jxH4lT2DWf",
"f47T262GX4",
"cpXxxez6ag",
"cQZ5W9pEhP",
"YlcOt0YTH3",
"XE9UbpBLo1",
"W74ev49dkR",
"Vih0dZX9Vi",
"VIooOttLyj",
"To58ecRKIk",
"T5TXTCKXDs",
"S2rRVjM4pd",
"QTWeWMSgDd",
"PsiNWkhWe7",
"P3so8JGns4",
"MfBig48oGd",
"J4dIFDPkM3",
"H5xHM3kveo",
"CiN7tiN2mK",
"CAKhVgh5Xr",
"9Ct9VQrX1j",
"7nU0ydGeXf",
"6stztogaF5",
"4rEuL7ZnDa",
"2hOfC3vgLH"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1731959422762,
1733294218060,
1732857428627,
1733036240783,
1730754522390,
1732071712588,
1732723735312,
1732071747074,
1732069874560,
1733172469258,
1731914761260,
1732250155774,
1731914465568,
1733036002621,
1733206715367,
1731913681456,
1732724801771,
1731914733603,
1733226204769,
1732857379069,
1730776422699,
1731913051269,
1732857234488,
1732258486064,
1733035957177,
1729814123666,
1730674602024,
1734354497579,
1732001021675,
1731912582032,
1732724483849,
1732730653350,
1732724450080,
1732724403235,
1732507903975,
1732207794868,
1732000966680,
1731913972482,
1731913491320
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_kKBu"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_kKBu"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_ag9Z"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_1rG5"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_ag9Z"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_d5Gh"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_ag9Z"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_1rG5"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_kKBu"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_1rG5"
],
[
"ICLR.cc/2025/Conference/Submission467/Reviewer_d5Gh"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
],
[
"ICLR.cc/2025/Conference/Submission467/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"I have read the other reviews and the author's responses and still strongly recommend rejecting the paper. To summarize my review: there are no semi-open LLMs, and there are no attacks that can steal these models if they did exist, therefore because the authors are operating in a fictitious setting there is no strong prior work in this domain. If anyone cared about this setting, they would immediately arrive at the trivial insights that constitute the entirety of this paper.\\n\\n> Semi-Open LLMs exist\\n\\nYour provided references for this are End-to-end systems that have a closed-source embedding model as one component of the pipeline. Nothing like what you are trying to attack/defend, an LLM that actually has some number of the layers open and some number of the layers closed, exists. \\n\\n> Threat model\\n\\nStating that the closed-source component acts as a black-box embedding layer does not actually make it an embedding layer. The SVD attack of Carlini et al. only works when there is exactly 1 layer working to take the inputs from representation space to vocabulary space. So there is no analogue here. There is still no evidence that the attack you are studying is a realistic threat. \\n\\n> The paper's insight is trivial\\n\\nThe existence of a transition layer follows immediately from the straightforward observation of compounding error at each layer. None of the techniques in the paper are novel. \\n\\n> The method just turns the insight into a metric\\n\\nStating that it is non-trivial to optimize a non-differentiable metric, and then proceeding to say that you just take a Taylor expansion, does not convince me that you did anything non-trivial here. The final metric is neither computationally efficient nor theoretically principled. It is just \\\"what happened if I closed off the first N layers of the model\\\". \\n\\n> The evaluation is not fair\\n\\nThe closed part of your model does not function similarly to an embedding model, so SEM likely is not a fair baseline here.\"}",
"{\"comment\": \"Dear Reviewer ag9Z:\\n\\nWe sincerely appreciate your kind and positive response. We are truly grateful for your recognition and support of our work and contributions. Furthermore, we deeply value the effort you have dedicated during the rebuttal process. Thank you once again.\\n\\nBest Regards,\\n\\nAuthors\"}",
"{\"title\": \"Official Response to Reviewer kKBu by Authors (3/3)\", \"comment\": \"**Q5. Embedding models already address the problem we studied.**\\n\\nThank you for your valuable feedback on our proposed method. We greatly appreciate the opportunity to further clarify and elaborate on the contributions of our work.\\n\\nWe acknowledge the reviewer\\u2019s point (kKBu 1rG5) that embedding models have already implemented aspects of what we propose, demonstrating that our design has been widely deployed in online scenarios, which we agree is a more trivial case. However, in the context of on-premises deployment of LLMs, it is not necessary for vendors to protect the model from the very first layer. Instead, vendors have the flexibility to secure any layer of the model and store it in a TEE or TDX. Thus, our contribution is twofold. First, we provide theoretical evidence showing that the design widely adopted in online scenarios can effectively balance model theft risks and customization performance. To the best of our knowledge, this has not been explored before. Second, we determine how many layers should be protected in the TEE to offer sufficient defense against recovery attacks. We would like to note that directly determining the optimal number of layers through recovery attacks would require time-consuming fine-tuning, while our method is significantly more efficient, and this approach has not been studied previously.\\n\\n\\n\\n**Thanks for your kind and helpful comments. We hope our responses could address your concerns and raise your confidence and we are looking forward to discussing with you to further improve our work!**\"}",
"{\"title\": \"Humbly Seeking Further Discussion\", \"comment\": \"Dear Reviewer ag9Z,\\n\\nWe kindly note that the author-reviewer discussion period is currently ongoing and approaching its conclusion. We would be grateful if you could review our latest general response (General Response 2) at your earliest convenience, where we have provided additional clarification regarding our **motivation**, particularly with respect to the **\\u201csemi-open model.\\u201d** We sincerely hope that this will futher clarify the contribution of our manuscript and address your concerns. We humbly request that you reconsider our manuscript and, if appropriate, kindly revise your score. Thank you very much for your time and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"summary\": \"The paper proposes a method for identifying which layers in the model that, if recovered by an adversary in an iterative layer recovery process, will make subsequent layers easier to recover.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The evaluation seems comprehensive. It was easy to follow the problem setup and the method.\", \"weaknesses\": \"I question the threat model that the authors are introducing. I don't think there's any chance of stealing an LLM through any method that exists no matter how semi-closed/semi-open it is. The only methods that have been proposed that can do something like this specifically target the embedding layer.\\n\\nIt seems like the main insight of the paper, that hiding the earlier layers in the model is more impactful than hiding later layers because if an attacker wants to recover the model they'll pay an accuracy error scaling in the depth of the model past the layer they haven't yet recovered, is trivial. If you asked someone who had never heard anything about this literature of hiding layers, whether they should hide the first block or the last block, I'm certain everyone would choose to hide the first block. There's plenty of work already showing that later layers are more or less redundant and don't learn anything new. This is because attention heads in block N have the ability to learn Nth order interactions, but for N > 2, these interactions typically don't get learned and the attention heads just degenerate [1].\\n\\nThe actual implementation of the method is not sophisticated. It just takes this straightforward insight and turns it into a metric. But that metric is itself just \\\"what happened if I closed the first N layers of the model\\\" and then returns the first one that passes some threshold of difficulty.\\n\\nIt doesn't seem like the evaluation is really fair. The authors evaluate against SEM. But SEM just wants to recover the embedding and the authors are trying to show what happens if they hide the early parts of the network. This seems like an indication that this isn't a particularly realistic threat model.\\n\\n[1] https://arxiv.org/abs/2404.08634\", \"questions\": \"n/a\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer d5Gh:\\n\\nKindly note that the author-reviewer discussion period is currently ongoing. We would greatly appreciate it if you could review our response when convenient. We earnestly request that you reconsider our manuscript and consider upgrading your score.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"title\": \"General Response 2 by Authors: More Elaboration on Motivation\", \"comment\": \"We sincerely thank all reviewers for their constructive comments on the semi-open paradigm. We would also like to take this opportunity to further elaborate on the scenarios where semi-open models are widely deployed.\\n\\nMany vendors now offer large language models (LLMs) that come with high training costs but deliver exceptional performance across various industries [1-2]. Industrial users, such as healthcare organizations, financial institutions, and government agencies, often request these models to be customized with private data, fine-tuned, and deployed locally in on-premises environments [3-5]. However, deploying models without encryption exposes their architecture and parameters, making them susceptible to model stealing attacks that retrieve parameters from CPU, RAM, and vRAM, posing significant risks to intellectual property [5-9]. To mitigate these risks, vendors use hardware-based security techniques like Trusted Execution Environments (TEEs) and encrypted inference, which conceal the model weights and restrict unauthorized access during inference [10-11]\\n\\nDespite their benefits, TEEs have significant limitations, such as restricted secure memory (e.g., 128MB\\u2013256MB in Intel SGX)[7,14], which is insufficient for the gigabytes of memory required by large language models [12,14,15]. As a result, only a portion of the model's layers can be secured within the environment, leaving the remainder exposed. This creates a **semi-open** model setting that balances the need for security and customization.\\n\\nHowever, this semi-open approach introduces a tradeoff. Securing or \\\"**closed-sourcing**\\\" more parameters limits the scope of fine-tuning on private data, reducing the model's ability to adapt to specific needs. Conversely, exposing or \\\"**open-sourcing**\\\" more parameters increases vulnerability to model extraction or distillation attacks, where attackers can query the closed-source module, construct input-output pairs, and train a mimic model that replicates its functionality. In this paper, we address this challenge by exploring methods to determine which parts of the model should be closed-sourced in a secure environment, enabling effective fine-tuning while safeguarding against extraction risks.\\n\\nWe have revised the abstract and introduction and hope this updated explanation effectively clarifies our approach and addresses the concerns about the motivation for designing semi-open models.\\n\\n[1] https://openai.com/index/hello-gpt-4o/\\n\\n[2] GPT-4 Technical Report https://arxiv.org/pdf/2303.08774\\n\\n[3] How to run LLMs locally: Hardware, tools and best practices https://www.techtarget.com/searchEnterpriseAI/tip/How-to-run-LLMs-locally-Hardware-tools-and-best-practices\\n\\n[4] Locally Run Large Language Models May Help Preserve Patient Privacy https://www.techtarget.com/healthtechanalytics/news/366590151/Locally-Run-Large-Language-Models-May-Help-Preserve-Patient-Privacy\\n\\n[5] Securing AI Model Weights https://www.rand.org/pubs/research_reports/RRA2849-1.html\\n\\n[6] Deepsniffer: A dnn model extraction framework based on learning architectural hints https://dl.acm.org/doi/pdf/10.1145/3373376.3378460\\n\\n[7] SoK: All You Need to Know About On-Device ML Model Extraction - The Gap Between Research and Practice https://www.usenix.org/system/files/usenixsecurity24-nayan.pdf\\n\\n[8] DNN model architecture fingerprinting attack on CPU-GPU edge devices. 
https://ieeexplore.ieee.org/abstract/document/9797366\\n\\n[9] Deepsteal: Advanced model extractions leveraging efficient weight stealing in memories https://arxiv.org/pdf/2111.04625\\n\\n[10] Shadownet: A secure and efficient on-device model inference system for convolutional neural networks. https://arxiv.org/pdf/2011.05905\\n\\n[11] No privacy left outside: On the (in-) security of tee-shielded dnn partition for on-device ml. https://arxiv.org/pdf/2310.07152\\n\\n[12] CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment. https://arxiv.org/pdf/2410.13903\\n\\n[13] Open-Source Solutions for Running LLMs Offline: Benefits, Pros and Cons, and Should You Do It? Is it the Time to Have Your Own Skynet? https://medevel.com/running-llms-offline-benefits-should-you-do-it-1300/\\n\\n[14] A Fast, Performant, Secure Distributed Training Framework For LLM. https://ieeexplore.ieee.org/abstract/document/10446717\\n\\n[15] TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models. https://arxiv.org/pdf/2411.09945\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer 1rG5:\\n\\nKindly note that the author-reviewer discussion period is currently ongoing. We would greatly appreciate it if you could review our response when convenient. We earnestly request that you reconsider our manuscript and consider upgrading your score.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"comment\": \"Thank you for the detailed and comprehensive responses, which have clarified my concerns and provided a better understanding of SCARA.\\n\\nAfter reviewing other feedback and the authors\\u2019 responses, I like the thoughtful revisions, which have notably improved the paper.\\n(1) The paper effectively highlights the rise of semi-open models, where closed-source embedding models integrate with open-source modules. In your response to Reviewer KKBu, the distinction between 'semi-open pipeline' and 'semi-open model' is not crucial. What matters is the understanding and application of the concept.\\n\\n(2) SCARA addresses performance challenges in semi-open models by partitioning pretrained LLMs into closed-source and open-source components. This design balances customizability with enhanced resilience to recovery attacks, benefiting both vendors and users.\\n\\n(3)According to the general response, the threat model discussed is well-recognized within LLM communities. As highlighted in your paper, there is substantial research on model recovery and model extraction attacks, indicating a significant interest and concern in these areas. \\n\\n(4) I agree with Reviewers d5Gh and 1rG5, the analysis of a transition layer is particularly compelling and may have broad implicatio\\nns for the research community. This insight helps us estimate the minimal number of hidden layers required without fine-tuning.\\n\\nThis paper provides a well-structured, insightful contribution of broad interest. The SCARA framework balances customizability with resilience to recovery attacks, enhancing semi-open models. I recommend to accept.\"}",
"{\"comment\": \"Dear Authors,\\n\\nI did read the general response. I agree with Reviewer kKBu's opinion here --- also, I have provided a similar point earlier: \\n> By \\\"embedding models,\\\" they inherently already choose to close-source the first multiple layers of these models --- that the paper suggests doing. And for these embedding models, there are no such things as open-sourcing the later layers of the model. Moreover, it anyway makes no sense to only open-source the later layers of a model in any existing settings. \\n\\nEmbedding models are also by no means for defending model-stealing attacks \\u2014 they usually just output the embeddings of the last layer of the model.\\n\\nOverall, as many other reviewers suggested --- the threat model in this paper is not convincing. This impression still holds, even given the authors' arguments.\"}",
"{\"title\": \"Response to Reviewer ag9Z\", \"comment\": \"**Q6: Could you explain more about potential future work that could be included in the paper?**\\n\\n**R6:** Thank you for your comment. Our current work focuses on constructing Semi-open models to prevent recovery of general abilities but does not explore protecting specific domain abilities. Additionally, our method does not yet accurately identify transition layers or determine the optimal number of parameters to protect. Future work will address these limitations by focusing on specific domain protection, identifying transition layers, and optimizing parameter protection strategies.\\n\\n\\n\\n**Q7: Could the authors clarify what the value 0.00 represents in Table 1 and Table 2?**\\n\\n**R7:** Thank you for your insightful comment. In Tables 1 and 2, the value \\\"0.00\\\" signifies that the model's performance on the corresponding benchmark or domain is entirely lost, achieving a score of 0 on a scale of 0 to 100. Following your valuable feedback, we have further clarified the meaning of these values in the captions to ensure better understanding.\\n\\n\\n\\n**Q8: The authors discussed the impact of datasets of different lengths on the effectiveness of SCARA in the experimental section, but these datasets did not appear in the setup. Could the authors provide a detailed introduction to the composition of these datasets?**\\n\\n**R8:** Thank you for your insightful comment. The extensive datasets used to evaluate SCARA\\u2019s effectiveness are extensions of the 51K attack dataset. Specifically, we construct the 100K, 200K, 300K, and 500K datasets incorporating additional sources, including Baize (158K multi-turn conversations from ChatGPT self-chat), MathInstruct (260K curated mathematical instruction instances), and OpenOrca (1M GPT-4 completions and 3.2M GPT-3.5 completions). These supplementary datasets enhance the attack by supporting complex tasks and providing broader topic coverage. For further details, please refer to Section 5.1 and Appendix B.2 of the paper. Following your suggestion, we have revised the manuscript to clarify the datasets used in our analysis.\\n\\n\\n\\n\\n\\n**Thanks for your kind and helpful comments and we are looking forward to discussing with you to further improve our paper!**\"}",
"{\"title\": \"Official Response to Reviewer d5Gh\", \"comment\": \"Thank you for your thoughtful comments and for taking the time to review the rebuttal. Regarding your concern about the motivation for the \\\"semi-open\\\" or \\\"grey-box\\\" model, I would like to further clarify our research intentions and their significance.\\n\\nAs OpenAI noted in its response to NTIA\\u2019s (National Telecommunications and Information Administration) request regarding LLMs with open weights: \\u201cWhen we did not observe significant misuse effects, this gave us the confidence to openly release the full model weights\\u201d [1]. This highlights that the risk of misuse remains a significant concern, prompting many LLM vendors to prefer releasing their models via black-box APIs to better manage and mitigate these risks. At the same time, many companies, like OpenAI, recognize that this approach unfortunately limits LLMs' downstream customizability [2]. Therefore, it is crucial to maintain control over the model and decide whether it could be useful for any malicious purposes. This perspective has inspired us to consider the model as a pipeline\\u2014keeping part of it closed-source and proprietary while releasing other parts to enable downstream customization.\\n\\nTo the best of our knowledge, only a few prior studies have addressed these challenges. Our work introduces a pioneering paradigm that employs a selective closed-sourcing approach for LLMs, wherein only the key early layers are closed-source, while most layers remain publicly accessible. This forward-thinking solution demonstrates the ability to simultaneously ensure control over the model and enhance customization for downstream tasks, encouraging LLM vendors to adopt greater openness. We take pride in being the first to propose such a framework, which not only safeguards vendor interests but also empowers downstream users with greater customization capabilities. This approach allows researchers broader access to powerful models, enabling advancements in areas such as academic research and creative modifications for downstream performance across various domains. We believe this will foster meaningful discussions and encourage LLM vendors to open more parts of their advanced models, thereby promoting innovation and development in the AI community.\\n\\nThank you again for your valuable feedback, which is instrumental in refining this work. We are hopeful that the additional clarifications provided might encourage a reconsideration of the score, reflecting the innovative potential of our work to positively influence the AI community.\\n\\n[1] OpenAI\\u2019s comment to the NTIA on open model weights\", \"https\": \"//www.fluid.ai/blog/open-source-llm-vs-closed-source-llm-for-enterprise-use-cases\"}",
"{\"title\": \"Response to Reviewer 1rG5\", \"comment\": \"Thanks for your time and valuable comments. We are pleased for your positive comments. In the response below, we provide answers to your questions in order to address the concerns and increase your confidence.\\n\\n**Q1:** **Why the semi-open models are practically relevant?**\\n\\n**R1:** Thank you for your positive, constructive and valuable comments. We appreciate the reviewer\\u2019s feedback on the concern of semi-open models' practically relevant. We outlined the advantage of semi-open models in the general response section. We hope our response provides a clearer understanding of the motivations behind our approach.\\n\\n\\n\\n**Q2: Threat models are unclear.**\\n\\n**R2\\uff1a** Thank you for your insightful feedback. We have detailed the threat model in general response **R2** and further refined Section 3.1. Your suggestions have greatly improved the clarity of our paper.\\n\\n\\n\\n**Thanks for your kind and helpful comments and we are looking forward to discussing with you to further improve our paper!**\"}",
"{\"title\": \"Humbly Seeking Further Discussion\", \"comment\": \"Dear Reviewer 1rG5,\\n\\nKindly note that the author-reviewer discussion period is currently ongoing and is approaching its conclusion. We would greatly appreciate it if you could review our latest general response (General Response 2) at your earliest convenience, where we have provided additional clarification on our **motivation**, particularly regarding the **\\u201csemi-open model\\u201d**. We sincerely hope this can address your concerns. We humbly request that you reconsider our manuscript and, if appropriate, consider upgrading your score. Thank you for your time and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewer 1rG5:\\n\\nWe sincerely thank you for your time and valuable comments. We would like to take this opportunity to clarify the deployment scenarios of semi-open models and address your concerns regarding our threat model.\\n\\nWe acknowledge your point that framing the closed-source module as an embedding model may trivialize the problem, as embedding models typically start from the first layer and do not involve partial public exposure. To reduce the misunderstandings of the novelty and significance of our contribution, we have removed all references to embedding models in the revised manuscript.\\n\\nFurthermore, we believe that discussing semi-open model design in the context of on-premises deployment is more appropriate. In local deployment scenarios, vendors face the challenge of balancing flexibility with the risk of model theft [1-7]. Due to the limited memory and processing speed of hardware security methods, the semi-open model approach has been widely explored [1-3,6-7]. For instance, a 2020 MobiSys paper (with 195 citations) [2] states: *\\u201cDue to the limited memory of the edge device\\u2019s TEE, we partition model layers into more sensitive layers (executed inside the device\\u2019s TEE) and layers executed in the untrusted part of the operating system.\\u201d* More recent work by Song Han [6] further explores the balance between model customization flexibility and security, as stated in the abstract: *\\u201cIn this paper, we propose Offsite-Tuning, a privacy-preserving and efficient transfer learning framework that adapts billion-parameter foundation models to downstream data without access to the full model.\\u201d* However, despite these efforts, no principled theory exists to determine which layers should be concealed for optimal security and customization. To the best of our knowledge, our work is the first to theoretically address this gap. We have also revised our manuscript to clarify our motivation.\\n\\nRegarding our threat model, model distillation and recovery attacks have been widely studied in local deployment, where attackers query the hidden module and attempt to replicate its functionality by training a substitute model. This attack strategy has been discussed in recent top-tier security conferences, such as Hai \\\"Helen\\\" Li\\u2019s paper [8] (USENIX '24), which states: \\u201c*Query-based model extraction attacks aim at learning a substitute model with the predictions returned by a black-box target model*\\u201d.\\n\\nIn this paper, we first theorectically prove the existance of transition layer in LLMs, demonstrating that hiding layers before this transition enhances resistance to recovery attacks. Based on this, we propose SCARA, an effective and efficient method that identifying only a few bottom layers to conceal with a fine-tuning-free metric, enabling effective fine-tuning while safeguarding against recovery risks.\\n\\nThank you again for your constructive and helpful comments. We hope our response can addresses your concerns.\\n\\nBest Regards,\\n\\nAuthors\\n\\n\\n\\n[1] TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models. 
https://arxiv.org/pdf/2411.09945\\n\\n[2] Darknetz: towards model privacy at the edge using trusted execution environments http://arxiv.org/abs/2004.05703\\n\\n[3] Securing AI Model Weights https://www.rand.org/pubs/research_reports/RRA2849-1.html\\n\\n[4] Deepsniffer: A dnn model extraction framework based on learning architectural hints https://dl.acm.org/doi/pdf/10.1145/3373376.3378460\\n\\n[5] DNN model architecture fingerprinting attack on CPU-GPU edge devices. https://ieeexplore.ieee.org/abstract/document/9797366\\n\\n[6] Offsite-Tuning: Transfer Learning without Full Model https://arxiv.org/pdf/2302.04870\\n\\n[7] SoK: All You Need to Know About On-Device ML Model Extraction - The Gap Between Research and Practice https://www.usenix.org/system/files/usenixsecurity24-nayan.pdf\\n\\n[8] MODELGUARD: Information-Theoretic Defense Against Model Extraction Attacks https://www.usenix.org/system/files/sec24summer-prepub-409-tang.pdf\"}",
"{\"title\": \"Response to Reviewer kKBu\", \"comment\": \"Thanks for your time and valuable comments. In the response below, we provide answers to your questions in order to address the concerns and increase your confidence.\\n\\n**Q1: Questions on the threat model.** \\n\\n**R1:** Thank you for your thoughtful comments and concerns about the threat model. We appreciate your mention of work related to stealing parts of the embedding layer in LLMs [1]. In this paper, we focus on a different type of threat model, known as model recovery or model extraction attacks. Since 2016, researchers have studied how to extract parameters and structures from black-box encoders through these attacks [2]. These typically involve querying the black-box model, collecting input-output pairs, and training a replacement model to replicate the original's behavior.\\n\\nIn our design, the closed-source component acts as a black-box encoder, generating hidden representations for input data. As such, we believe the closed-source component in our design is vulnerable to model recovery attacks. To address this, we investigate how hiding certain layers can help defend against such threats. Further clarification is provided in the general response R2, and we hope this addresses and alleviates your concerns.\\n\\nIn response to the question regarding the potential for stealing a semi-open LLM, we address this concern in Section 5.3 of our paper. Our findings suggest that poorly designed semi-open models are indeed vulnerable to recovery attacks. For instance, in our experiments, designating a later decoder layer (e.g., the 29th layer in Llama2-7B) as the closed-source component while open-sourcing the remaining layers substantially increased the model's susceptibility to recovery. Specifically, when only the 29th layer was hidden, we observed an average recovery ratio approaching 100% under recovery attacks. This indicates that the recovered model could closely replicate the victim model\\u2019s behavior across various functionalities. These results highlight that without careful design, the risk of stealing a semi-open LLM remains significant.\\n\\n**Q2: The main insight of the paper is trivial.**\\n\\n**R2** Thank you for your detailed feedback. We appreciate the opportunity to clarify and expand on the contributions of our work.\\n\\nFirst, we would like to highlight that the question of \\\"hiding the first layer versus the last layer\\\" is not as trivial as it may appear. Prior work, such as SAP [3], has studied a similar problem, but they open-source the bottom six layers, and keep the remaining layers closed-source. They empirically investigated the customizability of this design for downstream tasks. However, in this paper, we demonstrate that such heuristic is not optimal for balancing customizability and resilience against recovery attacks. \\n\\nSecond, the primary message of our theorem is not simply that \\\"hiding earlier layers is better than hiding later layers.\\\" Instead, our theorem establishes the existence of a **transition layer**\\u2014a critical point in the model such that hiding layers before this transition offers strong resilience against recovery attacks, while hiding layers after it does not. The example of \\\"hiding the first layer versus the last layer\\\" was included for simplicity and to make the idea more accessible, but it represents just one instance of our broader result. 
To the best of our knowledge, this is the first work to both theoretically and empirically identify such a transition layer and rigorously prove its existence in the context of defending against recovery attacks. Our proof demonstrates the crucial role of the bottom layers in providing resilience, making them a key focus for effective defenses. Moreover, we introduced several novel techniques in our proof process that, to the best of our knowledge, have not been explored before. These techniques may have broader implications and could benefit the research community beyond this specific context.\\n\\nFinally, we appreciate the reviewer pointing out related empirical observations about the redundancy of later layers and their limited role in learning higher-order interactions. While this is not directly related to our main theorem, we believe our theoretical findings provide valuable insights into understanding LLMs in scenarios beyond defending against recovery attacks. We thank the reviewer for highlighting the potential broader implications of our results.\\n\\n[1] Stealing Part of a Production Language Model http://arxiv.org/abs/2403.06634 \\n\\n[2] Stealing Machine Learning Models via Prediction APIs https://arxiv.org/pdf/1609.02943\\n\\n[3] A Split-and-Privatize Framework for Large Language Model Fine-Tuning http://arxiv.org/abs/2312.15603\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer ag9Z:\\n\\nWe sincerely thank you for your recognition of the motivation behind our semi-open model framework. However, as pointed out by several other reviewers, our motivation does indeed face concerns regarding its practicality. Therefore, in our general response 2, we have provided further clarification of our motivation and made the necessary revisions to the manuscript. We would greatly appreciate it if you could review our response at your earliest convenience. We respectfully request that you reconsider our manuscript and kindly revise your score.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer ag9Z\", \"comment\": \"Thanks for your time and valuable comments. We are pleased that you acknowledged the novelty and significance of our focused problem, promising results and straightforward presentation. In the response below, we provide answers to your questions in order to address the concerns and increase your confidence.\\n\\n**Q1: Is the \\\"attack datasets\\\" mentioned in Figure 2 the same as the \\\"recovery datasets\\\" discussed later in the paper?**\\n\\n**R1:** Thank you for your insightful comments and for highlighting this inconsistency. The terms \\\"attack datasets\\\" and \\\"recovery datasets\\\" refer to the same dataset, which is used to facilitate model recovery attacks aimed at replicating victim model\\u2019s behavior. We sincerely appreciate your feedback and have revised Figure 2 in the manuscript to ensure consistent terminology is used throughout.\\n\\n**Q2: Could you clarify the formula and the loss function used for RD(I)?**\\n\\n**R2:** Thank you for your comment on clarifying RD. We have revised the definition and implementation details in Sections 4.2 and 5.1. Below is the detail formulation and explanation of RD:\\n\\nThe Recovery Difficulty (RD) quantifies the challenge of recovering the closed-source module and is defined as: $\\\\text{RD}(I) = \\\\mathbb{E}_{\\\\mathbf{X},Y,{\\\\theta_0}(I)}\\\\left[\\\\ell\\\\left(f(\\\\mathbf{X};{\\\\theta}_0(I)), Y\\\\right)\\\\right]$\", \"where\": \"\\\\- $I$: The set of indices for the closed-source layers, indicating which hidden layers are kept private.\\n\\n\\\\- **${\\\\theta}_0(I)$**: The initial parameters of the replacement model. Parameters for hidden layers are randomly initialized, while parameters for public layers remain unchanged.\\n\\n\\\\- $\\\\mathbf{X}$: Input features sampled to target general capabilities from the underlying distribution.\\n\\n\\\\- $Y$: Labels corresponding to \\\\(\\\\mathbf{X}\\\\).\\n\\n\\\\- $f$: The final output of the semi-open model.\\n\\n\\\\- $\\\\ell$: The loss function, where we use cross-entropy loss in this paper.\\n\\n\\\\- $\\\\mathbb{E}$: The expectation over the joint distribution of random inputs, labels, and randomly initialized \\n\\nWe estimate RD using 1,500 samples drawn from two diverse datasets: the MMLU benchmark and Alpaca 52k. These datasets cover tasks such as text comprehension, summarization, generation, code writing, and mathematical reasoning. Additional details are provided in Section 5.1 and Appendix B.4. Moreover, hidden layers for different closed-source sets are randomly initialized using Xavier initialization with PyTorch's default settings. In this study, the RD is averaged over three random seeds (20, 42, 1234) to ensure robustness.\\n\\n**Q3: Could you clarify how fully-closed and semi-open models differ in practice?**\\n\\n**R3:** Thank you for your thoughtful feedback. Fully closed models provide strong vendor control but significantly limit user customization options. For example, users must rely on fine-tuning APIs provided by vendors, which handle the fine-tuning process using vendor-controlled computational resources. In this setup, users do not have access to the model's internal parameters, restricting their ability to perform detailed customization. \\n\\nIn contrast, semi-open models strike a balance between customizability and robustness against recovery attacks. 
This approach allows vendors to retain control over proprietary components, secure revenue streams, and reduce the computational burden of fine-tuning. Meanwhile, users benefit from the flexibility to customize open-source modules offline, optimizing them for specific tasks. We hope this explanation clarifies our motivations, as outlined in our response to R1 in the general response section.\\n\\n**Q4: Could you explain more about the distinctions between FT-all, FT-closed, and SEM in Section 5.1?**\\n\\n**R4:** Thank you for your thoughtful comments. In the R2 of our general response, we have provided a detailed explanation of the distinctions among the three strategies. These distinctions enable a more comprehensive evaluation of SCARA's effectiveness under our proposed threat model. Additionally, we have revised the manuscript to further clarify these three strategies in our threat model. Thank again for your constructive suggestions.\\n\\n**Q5: Can the row and column headers in the tables be made clearer by avoiding abbreviations?**\\n\\n**R5:** Thank you for your suggestion. While we aim to maintain the table\\u2019s length and visual clarity, we understand the importance of readability. Following your suggestion, we carefully revise the manuscript to minimize abbreviations where possible and improve clarity.\"}",
"{\"title\": \"Official Comment by Reviewer ag9Z\", \"comment\": \"I would like to thank the authors for their effort in rebuttal and their thoughtful response, which further clarifies their practical motivation. In the initial review, I found the overall structure of the paper to be coherent and the proposed theoretical contributions to be highly innovative, which led me to assign a score of 8. However, as noted by other reviewers, the inclusion of embedding models introduced some confusion. The revised scenario of on-premises deployment is more aligned with my expectations, and I appreciate that the authors have now focused on this setting. Given that, to the best of my knowledge, there has been no prior work that addresses this issue both theoretically and empirically, I continue to strongly recommend the acceptance of this paper.\"}",
"{\"title\": \"Official Response to Reviewer kKBu by Authors (2/3)\", \"comment\": \"**Q3. The authors cite Carlini 2024 to support their case that the problem is worth studying, but Carlini 2024 does not operate in this threat model.**\\n\\nThank you for your constructive feedback regarding our citations. We reference Carlini\\u2019s paper to demonstrate that model extraction attacks can occur at different levels. Specifically, Carlini\\u2019s [10] method focuses on **partially extracting** information from fully closed-source production LLMs via API access. Meanwhile, other approaches such as model recovery attacks focus on **replicating the functionality of the entire model**, which has also been extensively studied [11-14]. In the on-premises deployment [10] scenario we consider, attackers aim at replicating the closed-source modules hidden within a TEE so that they can replicate the functionality of the entire model. We would like to note here that this is a very common attack in private deployment, which has been widely studied [8, 11,15-17]. For example, [11] propose a functionally-equivalent extraction attack in model stealing, where attackers train a local model to mimic the target's functionality. Similarly, [15] introduce Knockoff Nets, which steal the functionality of victim models by querying and training a knockoff model based on the obtained predictions. These high-fidelity attacks have also been discussed in Carlini's paper, which notes \\\"In this paper, we focus on high-fidelity attacks. Milli et al. (2019)[18] showed that if an attacker can compute gradients of a target two-layer ReLU model, then they can steal a nearly bitfor-bit equivalent model.\\\".\\n\\n\\n\\n**Q4. The conclusions of the method are trivial: just evaluate the model to see how many of the first few layers you can reasonable close, and close those layers. If anyone cared about this problem, this is the natural thing to do.**\\n\\nThank you for your valuable feedback on our proposed method. While prior works [4-8] have studied which layers of small models should be protected in a TEE, few studies have explored which parts of LLMs should be secured in a TEE to defend against recovery attacks. SAP [19] proposes opening the bottom six layers while keeping the remaining layers closed-source. However, it remains unclear why open-sourcing the first few layers provides a good balance between customization and security. In our paper, we address this issue theoretically, showing that protecting the bottom layers offers better resilience against recovery attacks than protecting the upper layers, for the same closed-layer size.\\n\\nWe believe our method is non-trivial, as no prior work has explicitly addressed the balance between theft risk and customizability in private deployment scenarios for LLMs, nor has anyone studied the effectiveness of protecting different layers to achieve this balance. To our best knowledge, we are unaware of any work suggesting that bottom-up protection provides sufficient resilience with a smaller protection size. While we acknowledge the reviewer has pointed out related empirical observations about the redundancy of later layers and their limited role in learning higher-order interactions [20], this is not directly applicable to our scenario. To the best of our knowledge, only a few studies address safeguarding vendor interests while allowing greater customization for industrial users. 
We welcome any related work showing that hiding the first few layers is the natural approach and would be happy to incorporate further discussion based on your recommendations.\\n\\n[10] Stealing part of a production language model.http://arxiv.org/abs/2403.06634\\n\\n[11] High accuracy and high fidelity extraction of neural networks. https://www.usenix.org/system/files/sec20-jagielski.pdf\\n\\n[12] Grey-box extraction of natural language models. http://proceedings.mlr.press/v139/zanella-beguelin21a/zanella-beguelin21a.pdf\\n\\n[13] Can't Hide Behind the API: Stealing Black-Box Commercial Embedding Models. https://arxiv.org/pdf/2406.09355\\n\\n[14] Sentence embedding encoders are easy to steal but hard to defend. https://openreview.net/pdf?id=XN5qOxI8gkz\\n\\n[15] Knockoff Nets: Stealing Functionality of Black-Box Models https://arxiv.org/abs/1812.02766\\n\\n[16] Practical black-box attacks against machine learning. https://dl.acm.org/doi/pdf/10.1145/3052973.3053009\\n\\n[17] CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment. https://arxiv.org/pdf/2410.13903\\n\\n[18] Model reconstruction from model explanations. https://arxiv.org/pdf/1807.05185\\n\\n[19] A Split-and-Privatize Framework for Large Language Model Fine-Tuning http://arxiv.org/abs/2312.15603\\n\\n[20] Inheritune: Training Smaller Yet More Attentive Language Models https://arxiv.org/pdf/2404.08634\"}",
"{\"summary\": \"The paper introduces SCARA, a selective closed-sourcing approach for designing semi-open large language models (LLMs) that enhance customizability while maintaining resilience against recovery attacks. The authors develop an algorithm that strategically keeps only a few bottom layers closed-source, ensuring model flexibility without compromising security. They theoretically demonstrate a \\\"transition layer\\\" within deep transformer models, showing that recovery errors in layers before this point lead to recovery failure, while errors in later layers have a limited impact. SCARA estimates the optimal number of layers to hide using a novel metric based on initial recovery loss, bypassing the need for fine-tuning. The method is applied to five models ranging from 1.3B to 70B parameters, tested across six downstream tasks and sixteen recovery benchmarks. Results show that SCARA improves downstream performance while requiring over ten times fewer closed-source parameters than baselines, achieving improvements, especially in domain-specific tasks like Financial, with 30% higher performance on Llama2-70B. SCARA maintains comparable resilience against recovery attacks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces SCARA, a method that selectively closes only the bottom layers of semi-open large language models (LLMs) to enhance customizability while maintaining resilience against recovery attacks.\\n2. It provides a theoretical analysis of the existence of a transition layer in transformer-based models.\", \"weaknesses\": \"1. **Unclear Motivation for Semi-Open Models:** The market is dominated by closed-source models and fully open-source models. If customization needs are already addressed by existing fine-tuning services provided for closed-source models (e.g., API-based fine-tuning on closed models like GPT-4), it would be insightful to understand the specific motivations and advantages driving the development of a semi-open architecture.\\n2. **The threat model is not clear.** The threat model concerning recovery attacks on semi-open LLMs is insufficiently defined. The paper does not clearly specify the adversary's capabilities, such as the extent of access to the model's architecture, parameters, or outputs. This lack of clarity makes it challenging to assess the effectiveness of the proposed SCARA method in mitigating such threats.\\n3. **Insufficient Details on SCARA's Implementation:** The description of SCARA's methodology is vague, particularly regarding the fine-tuning-free metric used to determine which layers to keep closed-source. The paper does not provide a clear explanation of how this metric is calculated, the data required, or the computational resources involved etc.\\n4. **Evaluation minors:** While the authors present experimental results across multiple models and tasks, the evaluation lacks depth. The paper does not offer a comprehensive analysis of SCARA's performance compared to existing methods, nor does it explore potential trade-offs between customizability and security.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"General Response by Authors (2/2)\", \"comment\": \"**Q2: The threat model is unclear and please provide more details.**\\n\\n**R2:** In our design, the closed-source component functions as a black-box encoder, generating hidden representations for input data. The open-source component then uses these representations and is fine-tuned for specific downstream tasks. However, since 2016, researchers have studied how to extract parameters and structures from black-box models in attacks known as **model recovery** or **model extraction attacks** [17]. These attacks typically involve querying the black-box encoder, collecting input-output pairs, and training a replacement model that replicates the behavior of the original.\\n\\nWe adopt a common threat model [18-19], where we assume the adversary can query the semi-open model, access its final outputs, and retrieve the representations generated by the closed-source module. Additionally, as described in [20-21], the adversary is assumed to know the architecture of the closed-source module but does not have direct access to its parameters. By leveraging full access to the open-source components, the adversary can fine-tune a replacement model based on the known architecture of the closed-source module, using the retrieved representations or final outputs as training labels. We examine three recovery strategies under this threat model:\\n\\n\\\\- **FT-all**: The adversary fine-tunes both the replacement model for the closed-source module and the open-source module together, using the input and final output of the semi-open model.\\n\\n\\\\- **FT-closed**: The adversary fine-tunes only the replacement model of the closed-source module, keeping the parameters of the open-source module unchanged, using the input and final output.\\n\\n\\\\- **SEM**: The adversary fine-tunes the replacement model of the closed-source module using the input and the representations generated by the original closed-source model, without involving the open-source module. \\n\\nWe have revised the paragraph begining with \\\"semi-open model recovery attack\\\" in Section 3.1 to provide more details on threat model.\\n\\n\\n**Summary of the Paper Revisions**\", \"the_main_updates_are_summarized_as_follows\": \"1. Page 1, Sec. 1: Add related works on semi-open models and their applications. \\n2. Page 3, Figure 2: Correct the description of the attack datasets \\n3. Page 3, Sec. 3.1: Provide more detailed threat model of the Semi-open Model Recovery Attack. \\n4. Page 5, Sec. 4.2: Supplement the definition of Recovery Difficulty (RD) with additional details. \\n5. Page 6, Sec. 5.1: Add more details on SCARA's implementation\\uff0c and a brief description of the attack dataset sizes. \\n6. Page 9, Sec. 5.3: Rewrite the analysis of the trade-off between customizability and resilience in SCARA.\\n7. Page 9, Figure 6(a)(b): Revise the figure and caption to better illustrate the trade-offs between customizability and resilience to recovery attack. \\n8. Page 28, Appendix B.9: Add a discussion on how resilience transitions vary across specific capabilities. 
\\n\\n**References**\\n\\n[1] https://openai.com/index/introducing-vision-to-the-fine-tuning-api/\\n\\n[2] https://ai.meta.com/resources/models-and-libraries/\\n\\n[3] https://platform.openai.com/docs/guides/embeddings/embedding-models\\n\\n[4] https://cohere.com/embeddingsh \\n\\n[5] https://cloud.google.com/vertex-aih \\n\\n[6] https://www.llamaindex.ai/h \\n\\n[7] https://haystack.deepset.ai/ \\n\\n[8] https://unstructured.io/blog/understanding-embedding-models-make-an-informed-choice-for-your-rag \\n\\n[9] Vector Search with OpenAI Embeddings: Lucene Is All You Need https://arxiv.org/pdf/2308.14963\\n\\n[10] Unsupervised Anomaly Detection in Multi-Topic Short-Text Corporahttps://cnrs.hal.science/hal-04471726/file/EACL_2023_ait-saada.pd\\n\\n[11] Detection of Hate Speech using BERT and Hate Speech Word Embedding with Deep Model https://www.tandfonline.com/doi/full/10.1080/08839514.2023.2166719\\n\\n[12] Performance Optimization in the LLM World 2024 https://dl.acm.org/doi/10.1145/3629527.3651436\\n\\n[13] How Open Source Machine Learning Software Shapes AI https://dl.acm.org/doi/10.1145/3514094.3534167\\n\\n[14] [2024 AI Predictions | NVIDIA Blog](https://blogs.nvidia.com/blog/2024-ai-predictions/)\\n\\n[15] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)\", \"https\": \"//ieeexplore.ieee.org/document/8466590/?arnumber=8466590\\n\\n[16] https://www.preprints.org/manuscript/202307.2142/v2\\n\\n[17] Stealing Machine Learning Models via Prediction APIs https://arxiv.org/pdf/1609.02943\\n\\n[18] I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences https://dl.acm.org/doi/full/10.1145/3595292 \\n\\n[19] Can't Hide Behind the API: Stealing Black-Box Commercial Embedding Models http://arxiv.org/abs/2406.09355\\n\\n[20] Stealing Part of a Production Language Model http://arxiv.org/abs/2403.06634 \\n\\n[21] Grey-box Extraction of Natural Language Models https://proceedings.mlr.press/v139/zanella-beguelin21a/zanella-beguelin21a.pdf\"}",
"{\"title\": \"Official Response to Reviewer kKBu by Authors (1/3)\", \"comment\": \"We sincerely thank you for your time and valuable comments. In the response below, we provide answers in order to address the concerns.\\n\\n**Q1. TEE memory is numbered in the MBs, not GBs, and is not even large enough to fit a single layer of any model worth stealing**\\n\\nThank you for your constructive comments. In order to address your concerns, we provide further clarification regarding TEEs. While individual TEE hardware typically has limited memory, certain recent implementations allow for the extension of secure memory. For instance, Microsoft Azure recently introduced the DCesv5 series VMs [1] (released on October 25, 2024), which leverage Intel\\u2019s Trust Domain Extensions (TDX) [2]. TDX is a hardware-based TEE that facilitates the deployment of trust domains (TDs) and can be scaled to support more than 8GB of secure memory [3]. This capacity is sufficient to protect several layers in models with 70B parameters or larger (e.g., a single layer of Llama2-70B requires approximately 1.75G of memory under float16 precision), but is insufficient to prtect the entire model (e.g., Llama2-70B requires more than 140G under float16 precision) .\\n\\nBesides from the capacity limits, [4-8] also state that using TEE to protect all parameters of an entire model is not a practical solution. As noted in [8], \\\"Attempting to shield the complete DNN model within a TEE (shielding-whole-model) could result in a 50x reduction in the model\\u2019s speed\\\". Since TEEs are CPU-based, they do not provide the same level of efficiency for fine-tuning and customization on private data as GPUs. This limitation results in a trade-off, where security is prioritized at the cost of meeting users\\u2019 customization needs.\\n\\nWe have revised the introduction of our manuscript to further clarify the capacity of TEE. We hope this revision addresses your concern.\\n\\n\\n\\n**Q2. The setting the authors are considering simply does not exist in the real world, because it doesn't make any sense to only open some layers of the model.**\\n\\nThank you for your valuable feedback regarding the practicality of the setting we consider. We appreciate the opportunity to provide further examples that support the setting we have explored.\\n\\nFirst, we would like to emphasize that model asset protection in private deployments on users' local servers is a real-world challenge, which has been studied since 2018 [4-8]. For instance, a paper published in MobiSys in 2020 [4] states: *\\\"Due to the limited memory of the edge device\\u2019s TEE, we partition model layers into more sensitive layers (to be executed inside the device\\u2019s TEE) and a set of layers to be executed in the untrusted part of the operating system.\\\"* This scenario aligns with our own, wherein certain layers are partitioned to be executed within the TEE, while the remaining layers are executed outside the trusted environment. The key difference is that the study [4] primarily focuses on defending against membership inference attacks (MIA), whereas our work focuses on mitigating risks related to model theft.\\n\\nSecond, we consider fine-tuning open-sourced layers to allow users to customize them using their local, private data. In our design, we employ the SCARA to identify and isolate a few layers within the TEE to secure the model, while allowing the remaining layers to be accessible for customization. 
As discussed in Section 5.3 of our manuscript, hiding more layers reduces the model\\u2019s customizability for downstream tasks but does not significantly affect the resilience provided by securing only the first few layers identified by SCARA. Meanwhile, protecting only those layers identified by SCARA offers customizability comparable to that of a fully open-source model. Therefore, the semi-open model constructed using SCARA strikes a balance between model security and customization, effectively addressing the trade-off between security and customization in on-premises deployments [9].\\n\\n[1] https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dcesv5-series?tabs=sizebasic\\n\\n[2] https://www.intel.com/content/www/us/en/developer/tools/trust-domain-extensions/overview.html\\n\\n[3] https://www.intel.com/content/www/us/en/developer/articles/technical/tdx-performance-isolated-partitioned-vms.html\\n\\n[4] Darknetz: towards model privacy at the edge using trusted execution environments http://arxiv.org/abs/2004.05703\\n\\n[5] No privacy left outside: On the (in-) security of tee-shielded dnn partition for on-device ml. https://arxiv.org/pdf/2310.07152\\n\\n[6] Confidential Inference via Ternary Model Partitioning. https://arxiv.org/abs/1807.00969\\n\\n[7] Slalom: Fast, verifiable and private execution of neural networks in trusted hardware. https://arxiv.org/abs/1806.03287\\n\\n[8] TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models. https://arxiv.org/pdf/2411.09945\\n\\n[9] Securing AI Model Weights https://www.rand.org/pubs/research_reports/RRA2849-1.html\"}",
"{\"title\": \"Official Response to Reviewer ag9Z\", \"comment\": \"We deeply appreciate your recognition of the motivation behind our semi-open model framework and the threat model we proposed. We would like to further clarify our research intentions and their broader significance.\\n\\nAs highlighted by OpenAI in its response to the NTIA\\u2019s (National Telecommunications and Information Administration) request regarding LLMs with open weights: \\u201cWhen we did not observe significant misuse effects, this gave us the confidence to openly release the full model weights\\u201d [1]. This statement underscores the persistent risks of misuse, which have driven many LLM vendors to prefer releasing models through black-box APIs to mitigate these challenges. Unfortunately, as noted by OpenAI and others [2], such an approach often restricts downstream customizability. Addressing the tension between maintaining control and promoting openness has been a key motivation for our exploration of a semi-open model design.\\n\\nIn our work, we propose a selective closed-sourcing framework that seeks to balance controlled risk management with enhanced opportunities for downstream adaptation. By keeping only the early layers of the model closed-source while making the remaining layers accessible, this approach aims to address vendor concerns about misuse while empowering users with greater customizability. To the best of our knowledge, few prior studies have explicitly tackled this challenge. We see our framework as a preliminary step toward inspiring more open practices among LLM vendors, mitigating potential risks, and enabling broader access to powerful, adaptable models. We hope this approach supports academic research and diverse applications, fostering meaningful collaboration and innovation across domains.\\n\\nOnce again, we sincerely thank you for your valuable feedback, which has been instrumental in refining our work. Thank you for your thoughtful review and consideration.\\n\\n[1] OpenAI\\u2019s comment to the NTIA on open model weights\", \"https\": \"//www.fluid.ai/blog/open-source-llm-vs-closed-source-llm-for-enterprise-use-cases\"}",
"{\"title\": \"Humbly Seeking Further Discussion\", \"comment\": \"Dear Reviewer d5Gh,\\n\\nKindly note that the author-reviewer discussion period is currently ongoing and is approaching its conclusion. We would greatly appreciate it if you could review our latest general response (General Response 2) at your earliest convenience, where we have provided additional clarification on our **motivation**, particularly regarding the **\\u201csemi-open model\\u201d**. We sincerely hope this can address your concerns. We humbly request that you reconsider our manuscript and, if appropriate, consider upgrading your score. Thank you for your time and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"summary\": \"This paper presents SCARA, a method to identify decoder layers to hide in decoder-only LLMs. The authors provide theoretical proof of transition layers, where early errors are amplified in subsequent layers. They introduce RD to assess post-recovery performance when specific layers are hidden. Experiments show that SCARA, by hiding only a few layers, achieves a recovery ratio close to baselines while maintaining customization performance similar to fully open approach. The experiments also confirm the existence of transition layers in the models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The paper is well-written, featuring thorough experiments and clear explanations from both theoretical and empirical perspectives.\\n2. The overall layout is visually pleasing, and the figures are diverse and effectively illustrate the content, aiding readers' understanding.\\n3. It proposes a straightforward and effective method for constructing a semi-open model by hiding only a few layers, while achieving baseline-level resilience and customization performance comparable to the fully open setting.\\n4. The insight regarding the existence of a transition layer contributing to resilience is particularly compelling, with a detailed theoretical explanation that enhances understanding.\\n5. The authors provide comprehensive empirical validation across multiple architectures and benchmarks, covering models of various sizes (1.3B-70B), and testing customizability and recovery performance on several benchmarks. They also conducted experiments on recovery datasets of different sizes, demonstrating sufficient experimental rigor.\\n6. The authors proposed additional enhancements to the original baseline, strengthening the protection of baseline SAP and highlighting SCARA\\u2019s effectiveness in preserving resilience.\\n7. The authors empirically validated their theory of the transition layer\\u2019s existence and pointed out that smaller models exhibit transition layers earlier than larger models.\\n8. The authors clearly identified the limitations of the SCARA method, noting its ineffectiveness on small models (OPT-350M) and its inability to defend against other adversary attacks.\\n9. The proposed SCARA algorithm has clear practical applications, offering a viable solution for enhancing the customizability of semi-open models while preserving comparable resilience.\", \"weaknesses\": \"1. One mathematical notation in Section 4.2 is unclear. The loss function $\\\\ell$ for RD(I) is not specified, making it confusing.\\n2. Figures 1 and 2 have minimal captions and small text, reducing readability and limiting their ability to convey insights.\", \"questions\": \"1. Is the \\\"attack datasets\\\" mentioned in Figure 2 the same as the \\\"recovery datasets\\\" discussed later in the paper?\\n2. Could you clarify the formula and the loss function used for RD(I)?\\n3. Could you clarify how fully-closed and semi-open models differ in practice?\\n4. Could you explain more about the distinctions between FT-all, FT-closed, and SEM in Section 5.1?\\n5. Can the row and column headers in the tables be made clearer by avoiding abbreviations?\\n6. Could you explain more about potential future work that could be included in the paper?\\n7. Could the authors clarify what the value 0.00 represents in Table 1 and Table 2?\\n8. 
The authors discussed the impact of datasets of different lengths on the effectiveness of SCARA in the experimental section, but these datasets did not appear in the setup. Could the authors provide a detailed introduction to the composition of these datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper studies the problem of how to design semi-open models (i.e., models whose weights are only partially open-sourced) that can simultaneously also be resilient to recovery attacks. The paper finds that a transition layer exists, such that even small recovery errors in layers before this layer can lead to recovery failure. Building on these insights, the paper proposes an approach called SCARA that keeps only a few bottom layers as closed-source. With this new approach, the paper shows that it is possible to improve downstream customization performance while maintaining similar resilience.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written, and the presentation is clear and clean.\\n\\n2. The approach is well motivated --- it first starts from an empirical observation that closed-sourcing the first two layers offers significantly greater resilience than the last two, while the model shares similar customizability under the two cases. This implies that close sourcing the later layers may be the optimal solution for keeping the resistance to recovery attacks. The paper subsequently further formally establishes this finding with rigorous theoretical analysis, showing the existence of a transition layer such that even small recovery errors in layers before this layer can lead to recovery failure. This also intuitively makes sense --- when the attacker is asked to recover the earlier layers as opposed to the later layers, the errors in early layers will be amplified by later layers. This asymmetry is natural. \\n\\n3. Based on this insight, the paper also proposes an effective approach for the selectively closed-sourcing model to defend against recovery attacks. The experiment results support the effectiveness of the approach. \\n\\nOverall, the paper is nicely done.\", \"weaknesses\": \"One outstanding weakness of this paper is that the threat model considered may not be practically relevant. It seems the authors coined the semi-open model's scenario, that seems not really exist in the real world.\\n\\nCurrently, the most common setups are either open-source or closed-source. For close-source context, when developers do want their users to customize their models, the standard practice is to deploy fine-tuning APIs (e.g., https://platform.openai.com/docs/guides/fine-tuning) rather than partially open-source a part of the model. It seems to make no sense to only open-source the first few layers of a model to enable customization. Because the customization anyway still needs the involvement of the closed-source developers --- so they can fine-tune and connect the first few layers and the later layers to really deploy the model. Then, why not just close-source all weights and directly ask the users to upload custom data, and then the closed-source developers fine-tune and deploy the model for the users, like what is being done in fine-tuning APIs? \\n\\nI worry that if not developers will do the partial open-sourcing like the authors of this paper consider, then the problem itself may not hold.\", \"questions\": \"Can the authors explain and clarify why the semi-open models are practically relevant?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"title\": \"Response to Reviewer kKBu (2/2)\", \"comment\": \"**Q4:The method just turns the insight into a metric:** Stating that it is non-trivial to optimize a non-differentiable metric, and then proceeding to say that you just take a Taylor expansion, does not convince me that you did anything non-trivial here. The final metric is neither computationally efficient nor theoretically principled. It is just \\\"what happened if I closed off the first N layers of the model\\\".\\n\\n**R4:** Estimating the testing performance of an LLM after a recovery attack, without relying on fine-tuning, is a non-trivial challenge. While implementing the estimator is relatively straightforward, the theoretical insights behind this metric go beyond a simple application of Taylor expansion. Specifically, we use Taylor expansion and neglect higher-order terms based on the theoretical insight that gradient descent does not deviate significantly from the initial point, as demonstrated in convergence analyses for neural networks [10-12]. This approach is both non-trivial and grounded in theoretical principles.\\n\\n\\n\\nRegarding computational efficiency, this metric is advantageous as it requires only the evaluation of a partially initialized LLM on a small test set, without necessitating fine-tuning. In contrast, directly estimating recovered performance after an extraction attack would require fine-tuning, which is computationally expensive and time-consuming.\\n\\n\\n\\n**Q5:The evaluation is not fair:** The closed part of your model does not function similarly to an embedding model, so SEM likely is not a fair baseline here.\\n\\n**R5:** As we discussion in R2, to eliminate potential confusion, we have updated the paper and responses to replace \\\"embedding model\\\" with \\\"encoder.\\\" There is a rich literature in stealing/extracting closed-source encoder [4-9], where SEM is one of the attack in this type. We include this attack method since it is one of the main attack method used for stealing or extracting the encoder. Additionally, the SEM attack is used in the grey-box extraction to steal the classification layer in a grey-box (i.e., semi-open) model. \\n\\n\\n\\n\\n\\n[10] Neural tangent kernel: Convergence and generalization in neural networks. https://arxiv.org/pdf/1806.07572\\n\\n[11] Convergence analysis of recurrent neural networks[M]. Springer Science & Business Media, 2013. \\n\\n[12] A convergence analysis of gradient descent for deep linear neural networks. https://arxiv.org/pdf/1810.02281\"}",
"{\"title\": \"General Response by Authors (1/2)\", \"comment\": \"Dear area chair and reviewers,\\n\\nWe sincerely thank the reviewers for their time and valuable comments. Overall, the reviewers appreciated our innovative theoretical insights (d5Gh, 1rG5, ag9Z), particularly the identification of transition layers (ag9Z). All reviewers also commended the comprehensive evaluation across various model sizes and benchmarks, as well as the clear and effective presentation of our findings (kKBu, ag9Z, 1rG5). Additionally, some reviewers highlighted the effectiveness of SCARA (1rG5) and the potential impact of our work on future research directions (ag9Z).\\n\\nWe acknowledge the reviewers' concerns regarding the real-world applicability of semi-open models and the details of our threat model. In this general response, we aim to clarify our motivation and provide a more detailed explanation of the threat model and the adversary\\u2019s capabilities. In the individual responses, we address each specific comment and question raised. We hope these clarifications and responses adequately address the reviewers\\u2019 concerns.\\n\\n**Q1:** **Are semi-open models widely used? what is the advantage of these semi-open models? What is the motivation of designing semi-open models?**\\n\\n**R1:** Companies like OpenAI offer fine-tuning APIs [1] for closed-source models (e.g., GPT-4), while META open-sources models like the Llama series [2] for customization. Semi-open models, which combine closed-source embeddings (e.g., OpenAI's text-embedding-ada-002 [3] , Cohere's embeddings [4], or Google Vertex AI [5]) with open-source modules (e.g., LlamaIndex [6], Haystack [7]), have gained popularity for tasks like search [8], recommendation systems [9], anomaly detection [10], and classification [11]. This hybrid approach benefits both vendors and users. Vendors retain control over proprietary components, generate revenue, and reduce the computational demands associated with fine-tuning for downstream tasks [12-14]. Users, in turn, can customize open-source modules offline to better optimize performance for specific tasks [15-16].\\n\\nDespite their success in tasks like search and recommendation systems, semi-open models face challenges with more complex tasks, such as deep reasoning, knowledge-intensive operations, and problem-solving in code or math. To address these challenges, we propose a novel semi-open framework that enhances performance on complex tasks while ensuring protection against recovery attack. Our approach partitions a pretrained LLM into closed-source and open-source components, fully leveraging the model's capabilities for handling complex tasks. We theoretically prove that errors in the early decoder layers significantly impact performance, while later layers are less critical. Based on this insight, we introduce SCARA, a design that closed-source key early layers while allowing users to customize open modules. To optimize this design, we propose a fine-tuning-free metric, ``recovery difficulty'', to determine the optimal partition point. This framework provides a balance between high customizability and strong resilience against recovery attacks, advancing the capabilities of semi-open models.\\n\\nWe have revised the first paragraph in the introduction to provide more application of semi-open models.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer d5Gh:\\n\\nWe sincerely thank you for raising your concerns regarding the practicality of the motivation behind the problem we addressed. In our general response 2, we have provided further clarification of our motivation and revised the manuscript accordingly. We would greatly appreciate it if you could review our response at your earliest convenience. We respectfully request that you reconsider our manuscript and kindly revise your score.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"comment\": \"Multiple reviewers made it clear that the authors are studying a threat model that is not well motivated, and the conclusions of the method are basically trivial. In response, the authors have pivoted their paper to focus on a setting where the original model developers hosts some layers are on the TEE that are closed, and some layers outside the TEE that can be finetuned. However, TEE memory is numbered in the MBs, not GBs, and is not even large enough to fit a single layer of any model worth stealing. The setting the authors are considering simply does not exist in the real world, because it doesn't make any sense to only open some layers of the model. Because the problem of \\\"what LLM layers should be open and which should be closed to prevent model stealing during finetuning\\\" has no practical relevance, it has not been studied by any prior work. The authors cite Carlini 2024 to support their case that the problem is worth studying, but Carlini 2024 does not operate in this threat model. Without any prior work to measure their method against, we have to just evaluate the method on its merits. And the conclusions of the method are trivial: just evaluate the model to see how many of the first few layers you can reasonable close, and close those layers. If anyone cared about this problem, this is the natural thing to do, and as another reviewer has pointed out, this is what embedding models already do.\", \"in_summary\": \"the paper proposes a trivial solution to an unrealistic threat model. I recommend a reject.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer 1rG5:\\n\\nWe sincerely thank you for raising your concerns regarding the practicality of the motivation behind the problem we addressed. In our general response 2, we have provided further clarification of our motivation and revised the manuscript accordingly. We would greatly appreciate it if you could review our response at your earliest convenience. We respectfully request that you reconsider our manuscript and kindly revise your score.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer kKBu:\\n\\nWe sincerely thank you for raising your concerns regarding the practicality of the motivation behind the problem we addressed. In our general response 2, we have provided further clarification of our motivation and revised the manuscript accordingly. We would greatly appreciate it if you could review our response at your earliest convenience. We respectfully request that you reconsider our manuscript and kindly revise your score.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"title\": \"Response to authors\", \"comment\": \"I would like to thank the authors for their response. My concern regarding the practical relevance of the problem considered here still holds. I also read the reviews by Reviewer d5Gh & kKBu, and it seems that they also share the same concern, and I agree with them.\\n\\nRegarding the authors' rebuttal, the authors cited the embedding model from companies as an example to support the practicality of the problem here. This is a smart idea, but it can be easily disputed. By \\\"embedding models,\\\" they inherently already choose to close-source the first multiple layers of these models --- that the paper suggests doing. And for these embedding models, there are no such things as open-sourcing the later layers of the model. Moreover, it anyway makes no sense to only open-source the later layers of a model in any existing settings. This makes the main conclusion from this paper seem to be trivial.\\n\\nOverall, I think this paper is quite standard in terms of the conduct of the study & the writing. So, I initially gave a 6.\\nHaving said that, the problem itself studied in this paper is not a very valid and convincing problem. I will keep my current rating, but I won't strongly recommend acceptance.\"}",
"{\"comment\": \"Thank you for your efforts in rebuttal. Parts of my concerns are addressed, however, the \\\"semi-open model\\\" or \\\"grey-box\\\" motivation under the LLM context is still not clear. I decide to maintain my score.\"}",
"{\"title\": \"Response to Reviewer kKBu (1/2)\", \"comment\": \"Dear Reviewer kKBu:\\n\\n\\n\\nThank you for your time and comments. Below, we have provided responses to your questions to address your concerns.\\n\\n\\n\\n**Q1: Semi-Open LLMs exist**: Your provided references for this are End-to-end systems that have a closed-source embedding model as one component of the pipeline. Nothing like what you are trying to attack/defend, an LLM that actually has some number of the layers open and some number of the layers closed, exists.\\n\\n\\n\\n**R1:** Thank you for acknowledging the existence of end-to-end systems with semi-open pipelines that combine closed-source embedding models and open-source components. This aligns with the discussion in our paper, where the closed-source module serves as a component providing hidden representations, while the open-source module remains customizable. We use the term \\\"model\\\" broadly to describe these pipelines. However, if the reviewer prefers the term \\\"semi-open pipeline\\\" over \\\"semi-open model,\\\" we are open to adopting this terminology, although we believe \\\"model\\\" effectively conveys the overarching concept. Furthermore, \\\"semi-open models\\\" are also referred to as \\\"grey-box models\\\" in references [1-4].\\n\\n\\n\\n**Q2: Threat model**: Stating that the closed-source component acts as a black-box embedding layer does not actually make it an embedding layer. The SVD attack of Carlini et al. only works when there is exactly 1 layer working to take the inputs from representation space to vocabulary space. So there is no analogue here. There is still no evidence that the attack you are studying is a realistic threat.\\n\\n**R2:** We use the term \\\"embedding model\\\" to broadly describe an encoder model that transforms input sentences into high-dimensional real vectors, which aligns with the role of the closed-source component in our framework. However, we acknowledge that in some contexts, \\\"embedding model\\\" may specifically refer to the output of the last decoder layer in large language models. To eliminate potential confusion, we have updated the paper and responses to replace \\\"embedding model\\\" with \\\"encoder.\\\" There is a rich literature on stealing/extracting closed-source encoders such as [4-9].\\n\\n\\n\\n**Q3: The paper's insight is trivial**: The existence of a transition layer follows immediately from the straightforward observation of compounding error at each layer. None of the techniques in the paper are novel.\\n\\n**R3:** The observation of compounding error at each layer suggests a gradual change in the impact of individual layers in defending against recovery attacks. However, this gradual change does not directly lead to the sharp transition demonstrated in our theoretical results. 
Our contribution lies in rigorously establishing a theoretical analysis that proves the existence of this sharp transition, as acknowledged by all other reviewers.\\n\\n[1] Risks and Opportunities of Open-Source Generative AI https://arxiv.org/pdf/2405.08597 \\n\\n[2] Grey-box Extraction of Natural Language Models https://proceedings.mlr.press/v139/zanella-beguelin21a.html\\n\\n[3] A Comparative Analysis of White Box and Gray Box Adversarial Attacks to Natural Language Processing Systems https://www.atlantis-press.com/proceedings/iciaai-24/126004152\\n\\n[4] I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences https://dl.acm.org/doi/pdf/10.1145/3595292\\n\\n[5] StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning https://arxiv.org/pdf/2201.05889\\n\\n[6] Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries [2406.10280 (arxiv.org)](https://arxiv.org/pdf/2406.10280)\\n\\n[7] Refine, Discriminate and Align: Stealing Encoders via Sample-Wise Prototypes and Multi-relational Extraction https://arxiv.org/abs/2312.00855\\n\\n[8] Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services https://arxiv.org/pdf/2408.02814\\n\\n[9] Sentence Embedding Encoders are Easy to Steal but Hard to Defend https://publications.cispa.de/articles/conference_contribution/Sentence_Embedding_Encoders_are_Easy_to_Steal_but_Hard_to_Defend/25287991?file=44691499\"}",
"{\"title\": \"Response to Reviewer kKBu\", \"comment\": \"**Q3: The actual implementation of the method is not sophisticated. It just takes this straightforward insight and turns it into a metric.**\\n\\n**R3:** Thank you for your detailed feedback. Our goal is to predict the recovery ratio, which evaluates: \\\"If the first $N$ layers of the model are hidden, to what extent can adversaries recover the model's general capabilities?\\\" As discussed in Section 3.2, directly calculating this metric is highly computationally intensive. This process requires constructing a large query set and performing extraction attacks, which involve fine-tuning the entire semi-open model to replicate the original model for each possible hidden module configuration.\\n\\nTo address this challenge, we aimed to develop a metric that avoids fine-tuning while being highly correlated with the recovery ratio. This task is non-trivial because the score function used to measure recovery (e.g., accuracy in a reading comprehension task) can be non-differentiable. \\n\\nTo overcome this, we used the cross-entropy loss as a differentiable surrogate metric, which is negatively correlated with the score function. Using Taylor expansion and theoretical insights from prior work, we leveraged the observation that for large neural networks relative to the dataset size, the difference between the fine-tuned parameters $ \\\\|{\\\\theta}_{\\\\text{FT}}(I,\\\\mathcal{D}) - {\\\\theta}_0(I)\\\\|_2 $ is minor. Previous research has shown that for models like single-layer ReLU networks, this difference is of the order $\\\\mathcal{O}\\\\left(\\\\frac{|\\\\mathcal{D}|}{\\\\sqrt{N}}\\\\right)$, where $N$ is the number of model parameters, which is significantly larger than the dataset size in large language models.\\n\\nBy incorporating these insights, we derived our final metric, providing a computationally efficient and theoretically grounded approach to approximate the recovery ratio.\\n\\n\\n\\n**Q4: The evaluation isn't really fair. The authors evaluate against SEM. But SEM just wants to recover the embedding and the authors are trying to show what happens if they hide the early parts of the network. This seems like an indication that this isn't a particularly realistic threat model.**\\n\\n**R4:** Thank you for your comments. As discussed in the general response section, the closed part of our model functions similarly to an encoder in our design. SEM is an attack strategy aimed at replicating the black-box encoder, which we believe aligns with our threat model and is applicable to our setting. If the adversary successfully replicates the closed-source module using SEM, it would constitute a successful model stealing, effectively compromising the entire model. We appreciate the reviewer pointing out the potential confusion regarding SEM, and we have revised the paper to provide a clearer explanation of the SEM process.\\n\\n\\n**Thanks for your helpful comments and we are looking forward to discussing with you to further improve our paper!**\"}",
"{\"title\": \"Response to Reviewer d5Gh\", \"comment\": \"Thanks for your kind and helpful comments and we are looking forward to discussing with you to further improve our paper! We are encouraged that you appreciated our contributions, including the theoretical proof and comprehensive experiments. In the response below, we provide answers to your questions in order to address the concerns.\\n\\n**Q1:Unclear motivation for semi-open models.**\\n\\n**R1:** Thank you for your constructive and valuable comments. We greatly appreciate your feedback and the opportunity to clarify our motivations, as outlined in R1 within the general response section. We hope our response provides a clearer understanding of the motivations behind our approach. \\n\\n\\n\\n**Q2: The threat model is not clear.** \\n\\n**R2:** Thank you for your constructive and valuable comments. In our general response R2, we have further clarified our threat model and provided a more detailed description of the adversary's capabilities. \\n\\n\\n\\n**Q3: Insufficient** **details on SCARA's implementation**\\n\\n**R3:** Thank you for your comments on SCARA's implementation. We have revised the implementation details part in Section 5.1.The key component of SCARA is the **Recovery Difficulty (RD)**, which quantifies the difficulty of recovering the closed-source module. It is defined as:\\n\\n$$\\\\text{RD}(I) = \\\\mathbb{E}_{\\\\mathbf{X},Y,{\\\\theta_0}(I)}\\\\left[\\\\ell\\\\left(f(\\\\mathbf{X};{\\\\theta}_0(I)), Y\\\\right)\\\\right]$$\", \"where\": \"\\\\- $I$: The set of indices for the closed-source layers, indicating which hidden layers are kept private.\\n\\n\\\\- **${\\\\theta}_0(I)$**: The initial parameters of the replacement model. Parameters for hidden layers are randomly initialized, while parameters for public layers remain unchanged.\\n\\n\\\\- $\\\\mathbf{X}$: Input features sampled to target general capabilities from the underlying distribution.\\n\\n\\\\- $Y$: Labels corresponding to \\\\(\\\\mathbf{X}\\\\).\\n\\n\\\\- $f$: The final output of the semi-open model.\\n\\n\\\\- $\\\\ell$: The loss function, where we use cross-entropy loss in this paper.\\n\\n\\\\- $\\\\mathbb{E}$: The expectation over the joint distribution of random inputs, labels, and randomly initialized parameters of the closed-source module. This expectation is approximated in practice.\\n\\n**--Approximation of RD--**\\n\\n\\\\- **Evaluation Dataset**: To estimate RD, we construct an evaluation set that represents the general capabilities of the victim model. The dataset includes 1,500 samples evenly drawn from two diverse datasets: the MMLU benchmark and Alpaca 52k. These datasets cover tasks such as text comprehension, summarization, generation, code writing, mathematical reasoning, and knowledge reasoning. For more details, please refer to Section 5.1 and Appendix B.4.\\n\\n\\\\- **Random Initialization of Closed-Source Parameters**: To evaluate RD for different closed-source layer sets, the hidden layers are randomly initialized using Xavier initialization with PyTorch's default settings. 
The RD is averaged over three random seeds (20, 42, and 1234) during SCARA implementation.\\n\\n**--Computational Resources for RD--**\\n\\n\\\\- For models with up to 7 billion parameters, RD calculation and SCARA execution are performed on a system with **4\\u00d7RTX 4090 GPUs**, completing in approximately **8 minutes**.\\n\\n\\\\- For larger models like Llama2-70B, the process is carried out on **4\\u00d7A100 GPUs**, taking around **30 minutes**.\\n\\n\\n**Q4: No comprehensive comparison to existing methods:** \\n\\n**R4:** Thank you for your constructive and valuable comments on the comparison to existing methods. While the design of semi-open models has been widely studied in areas such as clustering, to the best of our knowledge, in the domain of complex tasks such as deep reasoning, knowledge-intensive operations, and problem-solving in code or math, SAP [1] is the only approach that serves as a basis for a semi-open LLM construction framework with which we can directly compare. We would sincerely welcome any suggestions for related approaches, and we would be happy to incorporate additional experiments based on your recommendations.\\n\\n\\n\\n**Q5: No analysis on potential trade-offs between customizability and security.**\\n\\n**R5:** We appreciate your feedback regarding the trade-offs between customizability and security. We have revised our manuscript to further explore this trade-off, Specifically, we examine these aspects using Llama2-7B and Phi-2. As shown in Section 5.3, we barely observe significant trade-offs in closed-source set placement but a clear trade-off in the number of hidden layers for smaller models like Phi-2. \\n\\n\\n**Thanks for your kind and helpful comments and we are looking forward to discussing with you to further improve our paper!**\\n\\n[1] A Split-and-Privatize Framework for Large Language Model Fine-Tuning https://arxiv.org/abs/2312.15603\"}"
]
} |
|
1R5BcYS8EC | SysCaps: Language Interfaces for Simulation Surrogates of Complex Systems | [
"Patrick Emami",
"Zhaonan Li",
"Saumya Sinha",
"Truc Nguyen"
] | Surrogate models are used to predict the behavior of complex energy systems that are too expensive to simulate with traditional numerical methods.
Our work introduces the use of language descriptions, which we call "system captions" or SysCaps, to interface with such surrogates.
We argue that interacting with surrogates through text, particularly natural language, makes these models more accessible for both experts and non-experts.
We introduce a lightweight multimodal text and timeseries regression model and a training pipeline that uses large language models (LLMs) to synthesize high-quality captions from simulation metadata.
Our experiments on two real-world simulators of buildings and wind farms show that our SysCaps-augmented surrogates have better accuracy on held-out systems than traditional methods while enjoying new generalization abilities, such as handling semantically related descriptions of the same test system.
Additional experiments also highlight the potential of SysCaps to unlock language-driven design space exploration and to regularize training through prompt augmentation. | [
"surrogate models",
"multimodal text and timeseries models",
"language-interfaced regression"
] | Accept (Poster) | https://openreview.net/pdf?id=1R5BcYS8EC | https://openreview.net/forum?id=1R5BcYS8EC | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xkG0RzQiM8",
"soJZYufbl3",
"m8Cj32TGet",
"kDJvcnPAfn",
"kBzBcyODsE",
"iDJFr5vfne",
"fE51PnxY4w",
"ePe2PSh9Cx",
"Y8ZqlZKZ4G",
"Wwgf8KkfVh",
"WDJMmRJcUB",
"TAL525GnH3",
"QtWgEIuuyc",
"LZdNlCrQJD",
"LEd5a8XRyA",
"Husd2LNLeq",
"GUrB81mOS6",
"DEuZ2sAjXf",
"C2lWY54O3J",
"BJ25I5aRIb"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732082776808,
1730795624836,
1732530377796,
1732384695020,
1732081220345,
1735003402210,
1732389158422,
1732385609794,
1732640268038,
1732561648101,
1732637400322,
1730379432088,
1737524181337,
1732388382513,
1732717001332,
1731911211115,
1732084114274,
1732551054525,
1730692012320,
1731925322857
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_38tw"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_Gxzq"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Area_Chair_ActA"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_2gzL"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_Gxzq"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_2gzL"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_Gxzq"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_2gzL"
],
[
"ICLR.cc/2025/Conference/Submission12317/Reviewer_Gxzq"
]
],
"structured_content_str": [
"{\"title\": \"Author response\", \"comment\": [\"Dear Reviewer 2gzl,\", \"Thank you for your review and for praising the presentation, motivation, and empirical validation of the work.\", \"**[W1,W2 - pretrained vs. finetuned embeddings]:** We will clarify in our Introduction that we propose to *fine-tune* text embedding models initialized from pretrained weights. We are working on adding a new ablation to the PDF that shows the performance of our SysCaps-nl model without fine-tuning the BERT text encoder (i.e., only using the pretrained embeddings). We are also going to run an ablation without a \\\"SOTA embedding model\\\"---\\u201cSimCSE-RoBERTa-large\\u201d , which is the current most-downloaded text encoder on HuggingFace as of 11/19/24.\", \"**[W3 - classifier hyperparams]:** We will add the hyperparameter details for the multiclass classifier to the appendix. We implement the classifier on top of the text encoder by adding a linear layer for each attribute type, where this layer predicts logits for each attribute's classes. We use AdamW with lr=3e-4, early stopping with patience 5, batch size 128, and max epochs 100. We do not freeze the text encoder weights.\", \"**[W4 - kv vs. nl]:** On the comparison between SysCaps-kv on the building dataset and SysCaps-nl on the wind dataset, we believe the relative performance difference is easily explained. The wind dataset, which only has 500 total systems (300 training, 100 validation, 100 testing), the SysCaps-nl model uses prompt augmentation to increase both the quantity and quality (e.g., higher variability in syntax and style) of the training data. On the buildings dataset, SysCaps-nl does not use prompt augmentation. This is because the size of the buildings dataset was too large to create multiple captions per building. We expect that training the SysCaps-nl model on the wind dataset *without* prompt augmentation would result in similar (or worse) performance than SysCaps-kv. We will run this experiment and add the results.\", \"**[W5 - non-experts]:** We agree that our method has the potential to help explain system attributes for surrogate models to non-experts. The current manuscript lacks discussion about this, which we will rectify. We emphasize, however, that evaluating this goes beyond the scope of our current methodology. For example, we imagine this would require instructing the LLM to identify system attributes whose names might be inscrutable to humans, and to then use the rest of the metadata and system prompt to come up with better names that are easier for non-experts to use. Then, the LLM could be instructed to use these new names to create the SysCaps. We will highlight this when discussing future work and potential impacts in our Conclusion.\", \"**[Q1 - Template sentences]:** We did not try training with template sentences. However, we consider the \\u201ckey-value\\u201d captions used by the SysCaps-kv models, which are lists of attribute names and value `key1:value1 | key2:value2 | \\u2026` as an estimate for the performance of sentence templates. We believe this is a reasonable assumption because a sentence template just adds extra words which do not possess new information about the system being simulated.\", \"**[Q2 - RFE]:** There are only 5 attributes for the wind dataset, which is relatively few and so RFE is not needed. 
The captions generated by the LLM for the wind dataset are also relatively short.\", \"**[Q3 - augmenting training]** Based on the results of our prompt augmentation experiment (Section 6.5), we have evidence that, yes, increasing both the quantity and quality of the training data via augmentation would lead to better performance. This can include generating extra captions by prompting the LLM to create captions using attribute synonyms. We will add more qualitative examples of the LLM-generated captions to the appendix in the revised PDF. Here is an example where the LLM paraphrased (bolded):\", \"\\\"The wind plant is designed with a cluster layout, featuring 40 turbines of varying heights. Each turbine boasts a rotor diameter of 130 meters, providing ample sweep area to capture the wind's energy. With a **mean turbine spacing** of 7 times the rotor diameter, the plant is optimized for efficient energy production, while minimizing visual impact and land usage.\\u201d\", \"\\u201cThe wind plant features a layout of multiple strings, with 73 turbines standing tall at an impressive rotor diameter of 130 meters. The **turbines are spaced at an average distance** of four times the rotor diameter, resulting in a highly efficient and productive wind farm. Each turbine is equipped with a rated power of 3.4 megawatts, making it capable of generating a significant amount of electricity from the wind.\\u201d\", \"For Question 4, perhaps you can clarify\\u2014is the question about whether we could improve performance by using a threshold to filter out \\u201clow quality\\u201d captions?\"]}",
"{\"summary\": \"This paper describes a set of lightweight models to model complex energy systems, using an LLM to generate prompts and a encoder and bidirectional time-series model to predict energy consumption.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors provide a clear explanation of their setup, flow of data across multiple components and their evaluation and analysis.\", \"(I do not feel sufficiently well-acquainted with this domain to evaluate the predictive contribution or performance of the models.)\"], \"weaknesses\": [\"I presume the authors' choice of models is due to resource constraints and aiming for a lightweight setup, but it feels like it has multiple components when it could be a simpler setup with fewer model components. For instance, a BERT-type model could also be used for time-series prediction (as opposed to only text encoding). Similarly the two-step process of generating prompts using a separate LLM and then encoding that prompt with an encoder could be avoided by just using the LLM directly and fine-tuning it.\"], \"questions\": \"I would be interested in seeing the performance compared between time-series-centric models and current generic architectures.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you\", \"comment\": \"I thank the reviewers for their comprehensive reply to my (and others') review. I have read all the discussions and I feel like the paper has been significantly strengthened by the authors during this discussion by addressing many of the reviewers' concerns.\\nI don't have any more questions at this point. Still, I would still like to stress that even though evaluating the technical feasibility of the proposed approach is a very valid first step towards the envisioned goals (of free-form text understanding to condition surrogate models), I cannot consider it more than a \\\"fair\\\" contribution in terms of novelty and impact. For this reason, and because the global score scale jumps from 6 directly to 8 (which has the very poor label \\\"accept, good paper\\\"), I don't think I can improve the scores any further. Just to be clear, I do think this is a \\\"good paper\\\" (which is why I find the labelling of the scale so lacking), but I cannot justify assigning it a score of 8, as that would imply a stronger endorsement than I am prepared to give. My evaluation remains that the paper makes a meaningful contribution and is worthy of publication, but it does not stand out as particularly innovative or high-impact to warrant a higher score. I hope this feedback helps clarify my position, and I thank the authors again for their diligent efforts in addressing the reviewers\\u2019 concerns.\"}",
"{\"title\": \"Question for reviewer\", \"comment\": \"Thanks again for your review. We have a question about one of your comments, which we are re-posting in case it got lost in our response above:\\n\\nWith regards to the requested comparison between time-series centric models and generic architectures, could you give us a specific model that you have in mind? The exact comparison you are requesting is unclear to us.\"}",
"{\"title\": \"Author response\", \"comment\": \"Dear Reviewer 38tw,\\n\\nThank you for your review! A single fine-tuned language-model (LM) for timeseries regression would be appealing. However, we can justify our choice to instead use two separate components for encoding text and performing timeseries regression.\\n\\n* **[Limited context window size of LMs]:** A single simulation for a complex system may run for thousands of time steps. For example, the building surrogate models in our experiments predict hourly energy consumption for 1 year (T= 8,760). This hinders the use of language models\\u2014BERT has a context window size of 512, and even alternative models such as LongFormer and Llama-2-7b have a max context window size of 4,096. Of course, there exist advanced tricks to aggregate multiple embeddings computed for a single document where the document length exceeds the context window. We believe the RNNs and State Space Models (SSMs) we use offer a simpler solution as they have no such limitations on the sequence length.\\n* **[Computational expense of LMs]:** The quadratic complexity of the self-attention mechanism in Transformer-based language models, as well as the high number of parameters in LLMs such as Llama-2-7b, makes LMs poor choices for surrogate models of complex systems in practice. For a concrete example, to conduct the sensitivity analysis case study (Figure 4), we perform 960K model predictions, which took us ~1 hour on 1 NVIDIA A100-40GB GPU. If we generously assume that Llama-2-7b takes 10 seconds to generate 8,760 tokens (this is assuming a generation speed of 876 tokens/sec), this case study would have taken 9.6M seconds, or ~111 days, to run. Our multi-component approach helps make our framework ready to be used by practitioners. \\n* **[Our two-step process vs. one-step]:** In our framework, at train time the first step is to use an LLM to generate synthetic natural language SysCaps. The second step is to encode the text caption for multimodal timeseries regression. The first step is only used at training time---at test time, a person can interact with the multimodal surrogate model and prompt it with a text description. While the idea of using a single LLM to directly perform timeseries regression is intuitive, as previously stated, we intentionally want to avoid using an LLM as a text encoder due to the computational burden. Moreover, an additional advantage of our approach is in its modularity. We can decide to swap out the Llama-2-7b LLM for a more powerful LLM such as Claude or GPT-4 to generate training captions. This is not so easy if the LLM is also fine-tuned for timeseries regression. Likewise, we can swap out the lightweight text encoder model for a more powerful one. Our experiments comparing DistillBERT and BERT (Table 1) show that larger, more powerful text encoders lead to better regression accuracy. \\n\\nFor the question about a comparison between time-series centric models and generic architectures, could you give us a specific model that you have in mind? The exact comparison you are requesting is unclear to us.\"}",
"{\"metareview\": \"Reviewers liked that the paper solves the well-motivated problem of interacting with complex energy system (CES) using natural language and the strength of the results. They did not evaluate much of the technical aspects and provided lower confidence reviews: 38tw: could use BERT to directly predict the time series. 2gzL: unclear if fine-tuned or not. Gxzq: how LightGBM was used as baseline is not clear. It is clear that the paper focuses on specialized area that could be important, but also very few reviewers understand. This is also out of the domain for me, so I defer to reviewers on the importance of the problem and recommend accept to err on the positive side.\", \"additional_comments_on_reviewer_discussion\": \"Not a typical ML paper and the system did not match to the correct area if there are enough people in the area.\\nThe review completion was was low, with only 3/6 reviewers submitting.\"}",
"{\"title\": \"Reply, updated manuscript\", \"comment\": \"Dear Reviewer 2gzl,\\n\\nThanks for the clarification for question 4. In the paper you shared, there is a binary notion of correctness for the LLM-generated caption, which is used to filter out low-quality captions for fine-tuning. Our setting is slightly different and more complex---we train *multi-class classifiers* to assess the correctness of the captions holistically in terms of all system attributes. While we agree that it is interesting to ask whether performance can be improved by training on only high-quality captions, we believe that determining how to do this properly for our setting deserves careful consideration and the experimentation involved will take more time than what the rebuttal period allows for.\", \"we_have_updated_the_manuscript_with_the_following\": [\"[W1] Clarified in the introduction that we fine-tune the language embeddings (Line 69).\", \"[W2] We trained SysCaps-nl with BERT without fine-tuning (pretrained only), and the results were poor (Stock-Annual NRMSE = 0.356) (Lines 419-420). We also tried training a version of our best buildings model, SysCaps-kv, using RoBERTa-large (sup-simcse-roberta-large), a text encoder which is more similar to recent \\\"SOTA\\\" text embedding approaches. The results are unsurprisingly comparable to BERT. `Sup-SimCSE RoBERTa-Large - Buildings-Hourly NRMSE = 0.488` vs. `BERT - Buildings-Hourly NRMSE = 0.450` and `Sup-SimCSE RoBERTa-Large - Stock-Annual NRMSE = 0.027` vs. `BERT - Stock-Annual NRMSE = 0.020`.\", \"[W3] Added multi-class classifier hyper-parameters to the appendix (Lines 956-958).\", \"[W4] We trained a SysCaps-nl model without prompt augmentation on the wind farm dataset and added this result to the PDF. The performance slightly decreases, as expected (from an NRMSE of 0.035 with prompt augmentation to an NRMSE of 0.038 without augmentation). Note that before, the NRMSE of the SysCaps-nl model was 0.036, but we discovered that this was computed with prompt augmentation still enabled.\", \"[W5] We added \\u201cIt is natural to expect that non-experts may benefit more from our approach if the LLM is also instructed to simplify the simulator metadata or to provide explanations of technical concepts. Conducting interactive evaluations with non-experts will be important to obtain feedback for further improving the approach.\\u201d (Lines 532-535) and \\u201cWe did not conduct user studies in this work, as we first aimed to establish technical feasibility of this surrogate modeling approach\\u201d (Lines 536-538).\", \"[Q3] We added more qualitative examples to the appendix (Appendix Figures 12 and 13) that visualize the diversity in text produced by the LLM when creating synthetic SysCaps, along with the model predictions, for specific building and wind farm test examples.\"]}",
"{\"comment\": \"Clarification for question 4: yes I was trying to reason if using a threshold to filter lower-quality captions. Recent work by [Singh et. al](https://arxiv.org/abs/2407.10657). shows that synthetically generated LLM captions harm smaller open-source LLM models more than larger LLMs. In the given case, since the authors identify lower-quality captions using some criteria, it would be insightful to see the performance of the high-quality subset.\"}",
"{\"comment\": \"I accept the authors arguments and so, while I still keep my score for the level of contribution, I have now changed my global assessment to 8 at the risk of erring on the positive rather than on the negative side.\"}",
"{\"comment\": \"I thank the authors for their clarifications and newer ablations conducted during the rebuttal phase. I have increased my score to reflect this.\\n\\nOverall, it is interesting to see the use of LLMs in modeling systems that can model captions and take account of time series data in a multimodal manner. Future improvements in this line of work would be useful.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear Reviewer 2gzL,\\n\\nThank you for increasing the rating to 8. We are glad that our response has addressed your questions. We would be happy to assist if you have any follow-up questions or wish to discuss further.\\n\\nWe appreciate the time and effort you have dedicated to reviewing our paper and engaging in discussions.\\n\\n-Authors\"}",
"{\"summary\": \"This paper studies the use of natural language captions as inputs to surrogate models that simulate \\\"complex energy systems\\\". These natural language captions describe the features of the system being simulated. The task is to predict a timeseries of some variable of interest that depends on these features and some other independent variable that is fed as a time series. The paper introduces an architecture that fuses the textual description with the time series data to achieve this goal.\\nThe viability of the approach and its robustness to out-of-distribution perturbations are validated with a relatively extensive empirical evaluation, including different ablations of the system (such as one-hot encoding of the features, or no features), variations on the caption lengths or replacing words with synonyms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"S1: Provides extensive empirical evaluation of the proposed system\", \"S2: The presentation is clear.\"], \"weaknesses\": [\"W1: The LightGBM baseline is underspecified. This baseline is the only one that stands as a reference point that is not an ablated version of the proposed model. However, as I understand it, LightGBM is a framework but not necessarily a model, so I don't really to which model this system is really being compared against.\", \"W2: Not very clear what is the added value of the proposal of using LLMs against simply using a template-based natural language description.\", \"W3: Despite the system is motivated on the potential intuitiveness of language interfaces to non-experts, no particular study is conducted on that front.\"], \"questions\": [\"Q1) What's the advantage of the proposed approach using LLMs over more traditional template-based natural language captions? (e.g. \\\"The building is <x> squared feet.\\\", etc.)\", \"Q2) In Figure 1, the key-value template has only a colon to separate the key and the value. Have you tried adding a space in between? I expect\", \"Q3) For the one-hot encodings, how do you deal with numeric inputs?\", \"Q4) In the results in Table 3, why did you expect longer captions to have larger error? I would have had the opposite intuition as shorter captions are more likely to miss important attributes.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Updated manuscript\", \"comment\": \"Dear Reviewer Gxzq,\", \"we_have_updated_the_manuscript_to_reflect_the_following\": [\"[W1] Specified our use of the GBDT model for LightGBM (Line 370)\", \"[W3] Added \\u201cIt is natural to expect that non-experts may benefit more from our approach if the LLM is also instructed to simplify the simulator metadata or to provide explanations of technical concepts. Conducting interactive evaluations with non-experts will be important to obtain feedback for further improving the approach.\\u201d (Lines 532-535) and \\u201cWe did not conduct user studies in this work, as we first aimed to establish technical feasibility of this surrogate modeling approach\\u201d (Lines 536-538).\", \"[Q4] Added qualitative example comparing the short, medium, and long captions (Appendix, Figure 12). We also provide a new qualitative example for the wind farm experiment showing the various caption style augmentations (Figure 13).\"]}",
"{\"title\": \"Thank you\", \"comment\": \"Dear Reviewer Gxzq,\\n\\nThank you for increasing the rating to 8. We sincerely appreciate the time and effort you have dedicated to reviewing our paper and engaging in discussions.\\n\\n-Authors\"}",
"{\"title\": \"Question about Q2\", \"comment\": \"Dear reviewer Gxzq,\\n\\nIt looks like your question Q2 got cut off - \\\"Q2) In Figure 1, the key-value template has only a colon to separate the key and the value. Have you tried adding a space in between? **I expect...**\\\". Can you provide the rest of this question? \\n\\nThank you!\"}",
"{\"title\": \"Author response\", \"comment\": [\"Dear Reviewer Gxzq,\", \"Thank you for your review!\", \"**[W1: LightGBM]:** We are using the gradient-boosted decision tree (gbdt) model. We used 1000 estimators. We conducted a grid search over the learning rate, number of leaves, subsampling rate, feature fraction, and min number of data in leaf parameters. The ranges and best values selected based on this hyperparameter sweep are provided in Tables 6 and 7 in the Appendix. We will improve our presentation here in the main text in an updated version of the PDF.\", \"**[W2: LLM-generated captions vs. templates]:** Our work considers the use of both a) templated system captions (the colon-separated attribute names and values) and b) \\u201cconversational\\u201d-style natural language captions generated by an LLM. Some advantages of (b) include: the ability to use the LLM to augment the training data via prompt augmentation (see Section 6.5), the ability to leverage the LLM to invent descriptions for you when trying to come up with a description of a complex system or data (see [1] for a recent example trying to describe a smoke buoyancy simulation), and the potential to use the LLM to rephrase attribute names or values that may appear inscrutable to a non-expert (see our response [W5 - non-experts] to Reviewer 2gzl about this). We highlight that unconstrained, free-form text captions have been used extensively for text-audio and text-music multimodal modeling [2]. This is useful when, for example, a user wishes to search for a song but can only vaguely describe it. We draw an analogy to the application of complex system design\\u2014in early stages, an engineer may only have a vague idea about the characteristics that the final system will have. However, as mentioned below [W3], we leave conducting interactive user studies for future work.\", \"**[W3 - user evals]:** The reviewer is correct that we did not conduct a study to understand how non-expert users perceive our method. Our empirical evaluation is guided by the question of whether templated captions and LLM-generated captions can achieve good regression performance on real-world systems. The focus of this paper is thus on first establishing the technical feasibility of this general approach. We argue that previous work does not provide a conclusive answer to this question, and that our work confirms its feasibility. We will add a discussion in the revised PDF about the importance of conducting user studies to quantify how non-experts perceive language-augmented surrogate models.\", \"**[Q1]:** See response W2.\", \"**[Q2 - tokenization]:** We did not try adding a space instead of a colon in-between the key and value. We verified that the `bert-base-uncased` tokenizer has no issue tokenizing the colon separately. We provide an example at the bottom of the comment.\", \"**[Q3 - numeric vars]:** The building and wind farm simulators have numeric attributes that are \\u201cbucketed\\u201d by design, i.e., they only take on a fixed number of values. For example, the building square footage attribute only takes on 10 different values. For both datasets, the one-hot encoding baselines all use sklearn\\u2019s OneHotEncoder to create one-hot vectors out of all attributes.\", \"**[Q4 - short vs. 
long]:** We expected \\u201clong\\u201d captions to have larger error because the BERT text encoder is fine-tuned on captions of \\u201cmedium\\u201d length, and Transformers are known to have difficulty with generalizing to longer sequences than seen during training. We could potentially improve generalization from \\u201cmedium\\u201d to \\u201clong\\u201d captions by, for example, using a text encoder with a more advanced position encoding strategy that is less sensitive to the input sequence length, but we leave this exploration for future work. We will add more qualitative examples of short, medium, and long building captions to the appendix to help convey these intuitions better.\"], \"references\": \"[1] Zhou, Anthony, et al. \\\"Text2PDE: Latent Diffusion Models for Accessible Physics Simulation.\\\" arXiv preprint arXiv:2410.01153 (2024). \\n[2] Huang, Qingqing, et al. \\\"Mulan: A joint embedding of music audio and natural language.\\\" arXiv preprint arXiv:2208.12415 (2022).\", \"example\": \"```python\\n>>> x = np.load('10_cap_ids.npy')\\n>>> tokenizer.decode(x)\\n'[CLS] building _ subtype : none | building _ type : smalloffice | number _ of _ stories : 1. 0 | sqft : 7500. 0 | hvac _ system _ type : psz - ac with no heat | weekday _ operating _ hours : 9. 25 | weekday _ opening _ time : 8. 5 | weekend _ operating _ hours : 8. 5 | weekend _ opening _ time : 6. 75 | tstat _ clg _ delta _ f : 0. 0 | tstat _ clg _ sp _ f : 75. 0 | tstat _ htg _ delta _ f : 6. 0 | tstat _ htg _ sp _ f : 68. 0 [SEP]'\\n```\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for contributing your time to reviewing and discussing our paper. We are glad to hear that your assessment is that our paper **\\\"makes a meaningful contribution and is worthy of publication\\\"**. We understand and sympathize with the limitations of the score scale and your hesitation to increase your score.\\n\\nHowever, we kindly ask you to raise your score, since, after discussion, the assessment is that the paper makes a meaningful contribution and is worthy of publication. Due to the limited score scale, a paper that is \\\"particularly innovative and high-impact\\\" would likely receive a score *higher* than 8. The act of raising your score post-rebuttal would also help confirm that \\\"the paper has been significantly strengthened by the authors during this discussion by addressing many of the reviewers' concern\\\". \\n\\nAgain, we thank you for taking the time to read our paper and engage in discussions!\"}",
"{\"summary\": \"The paper discusses the important challenge of building surrogate models for the prediction of simulation data. They specifically motivate the problem for complex energy systems(CES). These surrogate models often model system features as one hot vectors. The authors propose using text based descriptions to model these so-called surrogate systems with time series data. The text data is encoded as a dense embedding obtained from language models. The embedding is then fed to a bidirectional sequence encoder along with the time series data.\\n\\nThe paper discusses the generation of the text pertaining to the attributes of such systems and proposes an automatic evaluation strategy for the same. \\n\\nFor generating the captions the authors prompt an LLM with an in-context learning-based prompt that tunes the style and number of sentences. To evaluate the SysCap quality the authors train a multi-class classifier to check the attributes covered in the description generated by the LLM, using the text embedding. \\n\\nThe authors show how including SysCaps along with time series data leads to improved performance against baselines that perform onehot encoding over attributes. The authors further show how training a custom embedding model can aid in improving time series prediction over just using a time series-based model. They further empirically prove how the embeddings are more robust to synonyms and missing data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow. The authors motivate the problem well and empirically show improvements over 2 real-world datasets. Further, SysCaps can be used by non-expert users to understand the features of surrogate systems. The Design space exploration is insightful to show the features learned by the model.\", \"weaknesses\": \"1. The paper claims the technique uses a pretrained text encoder for generating the embeddings, but then in section 5 mentions that the models are actually finetuned. This should be explicitly mentioned in the claims that the paper makes rather than just mentioning that a pretrained embedding is used.\\n2. Further, the authors do not compare with the \\\"said-pretrained\\\" embeddings but only finetuned embeddings, and other SOTA embedding models for text encoding. \\n3. The paper also claims that they train a system to evaluate the caption quality, the parameters of the said multiclass classifier are omitted from the paper.\\n4. The paper claims that for the CES building energy consumption dataset, the SysCaps-kv configuration works best, and for the turbine configuration the SysCaps-nl, there should be some discussion regarding the insights drawn from both cases and why the performance for both techniques are different. \\n5. The authors claim that SysCaps would be useful for non-expert users, but lack the discussion if LLM-based explanations (complementary to the work done) can also aid in explaining the system attributes for surrogate models.\", \"questions\": \"In addition to the points in the weakness:\\n\\n1. Did the authors try to just templatize the sentences rather than generating them using an LLM, how would that impact performance (i.e. rather than telling an LLM to adhere to some constraint-based template, just have a sketch sentence and fill attribute values in the given sentence)?\\n2. Why wasn't RFE performed for the Wind Farm Wake modeling dataset, would performing RFE improve performance ?\\n3. 
Would the model not further improve if the SysCaps were generated using synonyms for the attributes, did the authors see the LLM generate different synonyms for the building or wind farm dataset? \\n4. Do the authors believe that training on the subset of data where the caption quality assessed by the classifier model, would improve the overall model performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Oh, sorry about that! Here's the rest of the question:\\n\\nQ2) In Figure 1, the key-value template has only a colon to separate the key and the value. Have you tried adding a space in between? I expect that some tokenizers could not appropriately segment the key, separator, and value as independent token sequences when they have no space in-between, leading to degraded performance.\"}"
]
} |
1Qq62mo8TW | AFFS: Adaptive Fast Frequency Selection Algorithm for Deep Learning Feature Extraction | [
"Zilong He",
"Kun Xie",
"Xiaocan Li",
"Jigang Wen",
"Jiannong Cao",
"Gaogang Xie",
"LiangWei",
"Kenli Li"
] | As deep learning (DL) advances, effective feature extraction from big data remains critical for enhancing DL models' performance. This paper proposes a method for feature extraction in the frequency domain, utilizing advantages such as concentrated signal energy and pronounced data features. However, existing frequency component selection algorithms face challenges like difficulty adapting to diverse tasks and achieving only locally optimal results with extended processing times. To address these challenges, we introduce the Adaptive Fast Frequency Selection (AFFS) algorithm, tailored for various subsequent tasks. AFFS incorporates a frequency component selection factor layer, integrating it with the subsequent DL model to select globally optimal frequency component combinations for the DL model. Additionally, we propose a fast selection algorithm to expedite the process, leveraging the experimental observation of rapid convergence of selection factor ranking. Experimental results demonstrate that AFFS achieves superior performance across three datasets and three DL models. By using AFFS to select appropriate frequency components, even though our input data size is only 10\% of the original frequency features, the classification accuracy of the model is improved by about 1\%. Furthermore, the early stopping mechanism can shorten the selection process by approximately 80\%. | [
"Discrete Cosine Transform",
"feature extraction",
"frequency domain",
"frequency components selection."
] | https://openreview.net/pdf?id=1Qq62mo8TW | https://openreview.net/forum?id=1Qq62mo8TW | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"KuhKXP1efS"
],
"note_type": [
"comment"
],
"note_created": [
1729019425807
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"desk_reject_comments\": \"For using smaller margin and narrower linespace to squeeze in more contents in 10 pages.\", \"title\": \"Submission Desk Rejected by Program Chairs\"}"
]
} |
|
1Qpt43cqhg | Fully-inductive Node Classification on Arbitrary Graphs | [
"Jianan Zhao",
"Zhaocheng Zhu",
"Mikhail Galkin",
"Hesham Mostafa",
"Michael M. Bronstein",
"Jian Tang"
] | One fundamental challenge in graph machine learning is generalizing to new graphs. Many existing methods following the inductive setup can generalize to test graphs with new structures, but they assume that the feature and label spaces remain the same as the training ones.
This paper introduces a fully-inductive setup, where models should perform inference on arbitrary test graphs with new structures, feature and label spaces. We propose GraphAny as the first attempt at this challenging setup. GraphAny models inference on a new graph as an analytical solution to a LinearGNN, which can be naturally applied to graphs with any feature and label spaces. To further build a stronger model with learning capacity, we fuse multiple LinearGNN predictions with learned inductive attention scores. Specifically, the attention module is carefully parameterized as a function of the entropy-normalized distance features between pairs of LinearGNN predictions to ensure generalization to new graphs. Empirically, GraphAny trained on a single Wisconsin dataset with only 120 labeled nodes can generalize to 30 new graphs with an average accuracy of 67.26%, surpassing not only all inductive baselines, but also strong transductive methods trained separately on each of the 30 test graphs. | [
"node classification",
"inductive generalization"
] | Accept (Poster) | https://openreview.net/pdf?id=1Qpt43cqhg | https://openreview.net/forum?id=1Qpt43cqhg | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zcBuvhywPp",
"vamTV0qp4Z",
"vEn0M376K1",
"n9QoaXxZj2",
"me14EZleof",
"m7HbnuBsg8",
"iOPHIWHMpw",
"fterQVNkeD",
"ZOzDkB3CIe",
"W5fmEbV7cx",
"TgU7swttep",
"TcN3NbJ6sT",
"RQpvmqtlGx",
"IwyEWtfHt7",
"IDD3hpoISM",
"HGNC3SQSgY",
"9xHrw8DjQV",
"9pqMRjPEPC",
"8wHnb1Jr0K",
"2gxUVuBD1J",
"1LI99hYgy1"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment"
],
"note_created": [
1737523695453,
1730430218829,
1732239357659,
1729763134126,
1730664392618,
1732241330266,
1732478740820,
1732423079948,
1732240765895,
1732460634837,
1732241545231,
1732240411798,
1732457212726,
1730714046229,
1732365982050,
1732544130881,
1732494023846,
1732299336598,
1732511102625,
1733931540561,
1732263311643
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_GtqS"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_SumM"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_o9Mt"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_o9Mt"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_eX9V"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_SumM"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_GtqS"
],
[
"ICLR.cc/2025/Conference/Submission5269/Area_Chair_Yhk1"
],
[
"ICLR.cc/2025/Conference/Submission5269/Reviewer_SumM"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"The authors focus on addressing a challenging problem: enabling GNNs to be fully-inductive across diverse datasets. They propose a model called GraphAny. Specifically, the authors employed multiple encoders (LinearGNNs) whose parameters can be obtained analytically, allowing it to generalize across datasets with different feature and label spaces. Additionally, the authors design an attention-based, learnable MLP to capture transferable graph patterns. Extensive experiments demonstrate the model's effectiveness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"S1. The authors are ambitious, tackling a highly challenging and valuable problem: designing a foundational GNN model that can generalize across diverse datasets.\\n\\nS2. The proposed method is ingenious. The authors introduce a LinearGNN that does not require training, enabling the model to adapt to different datasets.\\n\\nS3. The experimental results are powerful and impressive. \\n\\nS4. The authors provide the complete code, along with a well-organized README file, to support their views.\", \"weaknesses\": \"W1. In fact, the proposed LinearGNNs seem to me more like a data preprocessing method which requires no learning to unify the feature and label spaces through analytical solutions.\\n\\nW2. Regarding W1, the authors\\u2019 statement in the Introduction that GraphAny is the first fully-inductive method seems somewhat over-claimed. According to the views in this paper, any model that can be solved analytically (i.e., without training) could also be seem as fully-inductive. Nonetheless, this point does not negate the contribution of the attention-based component to knowledge transfer.\\n\\nW3. The paper does not mention some recent methods capable of achieving the fully-inductive as described, such as GraphControl [1]. \\n\\nW4. I suggest that the author should provide the data split ratio for downstream test datasets (it is vaguely mentioned only in the appendix). This is a crucial setting, as if my understanding is correct, the proposed method requires a certain amount of ground-truth labels to analytically solve the parameters of LinearGNNs on test datasets.\\n\\nW5. Based on W4, the approach in this paper seems to be semi-supervised (or fine-tuned) on downstream tasks, meaning it has access to the same amount of labeled data as other semi-supervised algorithms like GCN. Moreover, GraphAny benefits from additional prior knowledge from other datasets (i.e., the pre-training phase), making it seemingly more advantageous compared to other algorithms in experimental settings. This stands in contrast to the authors' claim that other semi-supervised algorithms have an additional advantage over GraphAny in the experimental settings.\\n\\nIf LinearGNNs do not require any test dataset labels to solve the parameters (i.e. completely zero-shot scenario), then W4 and W5 would not hold. I strongly recommend that the authors add further explanations in the paper to improve reader comprehension.\\n\\n[1] GraphControl: Adding Conditional Control to Universal Graph Pre-trained Models for Graph Domain Transfer Learning, WWW24.\", \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the constructive feedback. Below we would like to comment on the identified weaknesses.\\n\\n**W1. Why can GraphAny transfer to unseen graphs?**\\n\\nAbove all, **the proposed distance features capture the node-level message-passing pattern** and infer the optimal attention for fusing LinearGNN predictions. Here we give an intuitive example: We know that LinearSGC2 is a good prediction channel for homophilic graphs and MLP is a good channel for heterophilic graphs (Figure 7 left). Hence, for an unseen graph, a logical guess would be to assign higher attention weights to LinearSGC2 if the graph displays homophilic properties. From Figure 5, we observe that homophilic and heterophilic information can be inferred from the entropy-normalized features (bottom row), where homophilic graphs like Cora, Arxiv, and FullCora exhibit similar patterns. These message-passing patterns are leveraged by GraphAny to predict inductive attention scores.\\n\\nMoreover, it is important to note that our **inductive attention operates at the node level**, as specified in Equation 5. This means that even given an unseen graph with new structure/feature/label spaces, as long as the node of interest exhibits distance feature patterns akin to those of a node in a previously observed graph, GraphAny will likely assign similar attention weights. Luckily, even when training on one small graph, say Wisconsin with 120 nodes, there might be sufficiently diverse node-level message passing patterns, as observed in [Luan et al., 2022]. This enables the remarkable transferability of GraphAny to transfer to unseen graphs even trained on one dataset.\\n\\n\\n**W2. Insights on why GraphAny fails on some datasets and how to improve it.**\\n\\nThe reason why GraphAny doesn\\u2019t outperform transductive baselines on some datasets is that GraphAny is evaluated in a more challenging inductive setting. Specifically, the transductive models have additional advantages compared with GraphAny by leveraging unobserved validation sets of inductive data to optimize parameters when training the model (e.g. optimizer, learning rate, dropout ratio, number of epochs) and tuning hyperparameters (e.g. selecting the depth of graph convolutions).\\n\\nSecond, one limitation of GraphAny is that to achieve training-free inference, we leverage LinearGNNs with analytical solutions but with limited expressivity. As a consequence, the performance is bounded by the linear combination of those LinearGNNs. There are several straightforward solutions to improve the performance of GraphAny, though all at the cost of breaking the fully-inductive assumption. One possible solution is to relax this constraint and train transductive non-linear models first and learn how to combine their predictions using the interactions between them. Although sacrificing the training-free property of GraphAny, this should give a better transductive performance, especially for those datasets with complex non-linear patterns. \\n \\n**W3. Comparison against strong baselines that consider both homophily and heterophily.**\\nFollowing the reviewer\\u2019s suggestion, we included the required ACM-GNN [Luan et al., 2022] as a baseline due to its good performance on both homophilic and heterophilic graphs. However, ACM-GNN faces scalability issues that prevent its application to large graphs. 
Using the [authors\\u2019 implementation](https://github.com/SitaoLuan/ACM-GNN/tree/main/ACM-Pytorch/), we encountered out-of-memory for a GPU of 40GB on four large datasets: Questions, Reddit, Arxiv, and Product. This prevents us from adding ACM-GNN to our main table. Hence, we evaluated ACM-GNN on the remaining 27 graphs, with results reported in Table 6.\", \"our_observations_are_as_follows\": \"Tuning ACM-GNN is highly time-consuming, requiring 672 GPU hours on 27 graphs, while GraphAny requires only 4 GPU hours (**168\\u00d7 more efficient**), showcasing its efficiency and the advantage of inductive inference. In terms of performance, GraphAny outperforms ACM-SGC and is only slightly (1-2\\\\%) below ACM-GCN. However, this slight difference is not a significant disadvantage for GraphAny, given the unfair advantage transductive models have by leveraging the inductive validation sets for parameter and hyperparameter tuning, as well as the substantial difference in runtime.\"}",
"{\"summary\": \"The paper introduces GraphAny, a model designed for fully-inductive graph learning, where models must infer on new graphs with varying structures, features, and labels. GraphAny leverages LinearGNN for analytical graph inference, adaptable to diverse graph types. By integrating multiple LinearGNN predictions using learned inductive attention, GraphAny ensures robust generalization to new graphs. Empirical results demonstrate GraphAny's effectiveness, achieving a 67.26% average accuracy on 30 new graphs with minimal training data, outperforming both inductive and transductive baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. this paper raises a more general and challenging task for graph ood, that is the fully inductive node classification, which requires the model can generalize to arbitrary graphs, involving new structures, new dimensions, and semantics for their feature and label spaces.\\n\\n2. this paper designs a novel method GraphAny, that integrates the multiple LinearGNN predictions and learned inductive attention, which satisfies the permutation invariant and robust to dimension changes\\n\\n3. the paper gives comprehensive experiments and evaluation of various datasets, showing the effectiveness of their methods.\", \"weaknesses\": \"1. The lack of baseline. This paper seems only to compare with the test-adapted GNN models as the baselines (GCN, GAT, MLP), I am not very certain if any other GNN baselines trained on the one dataset while generalizing to more datasets, such as the LLM-based GFM[1].\\n\\n2. Since your method is based on the combination of 5 different linearGNNs ($ F = X$ (Linear), $F = AX$ (LinearSGC1), $F = A^2X $(LinearSGC2), $F = (I \\u2212 A)X$ (LinearHGC1) and $F = (I \\u2212 A)^2X$ (LinearHGC2) ), have you ever compared your method with the random coefficients combination of them? I suggest comparing GraphAny to a baseline that uses random or fixed coefficients to combine the 5 LinearGNN components. This would help isolate the benefit of the learned inductive attention mechanism.\\n\\n[1] One for All: Towards Training One Graph Model for All Classification Tasks\", \"questions\": \"1. Could you compare your method with the random coefficients combination of 5 different linearGNNs?\\n\\n2. According to your Table 2 and Figure 7, it seems that SGC1 and SGC2 occupy a dominant position( high weight and high accuracy). Could you discuss why this happens more? Could you analyze why SGC1 and SGC2 tend to get higher weights and accuracy? Does this suggest that simpler graph convolutions are more transferable? How might this insight inform future designs of inductive graph models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper tackles the issue of fully inductive graph learning by introducing GraphAny. The proposed method consists of LinearGNN to preprocess the features following the idea of SGC and attention module to transform the feature.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper tackles a great challenging fully-inductive graph learning task.\\n2. This paper introduces an inductive attention module that satisfies permutation invariance properties and generalizes to new graphs.\", \"weaknesses\": \"1. The presentation of this paper needs improvement. Many details are missing in the section of methodology.\\n- The authors conduct the experimental on motivating the entropy normalization, while the experimental setup in figure 5 is not explicit. It's not suggested to specify what these methods are until the section 4.1. The authors should provide more explicit explanation of the experimental setup for Figure 5.\\n- It's not clear what is the learnable parameters in the attention module and how to get the attention vector $\\\\alpha$. A clear description of the learnable parameters in the attention module should be added.\\n- It's weird to call $y_u^{(i)}$ in equations 9 and 10 as node feature and it's more proper to describe it as label vector considering its dimensionality. \\n\\n2. In figure 3, the authors mention that LinearGNN is non-parametric, but LinearGNN involves the learnable weight matrix W in equation 1. It's improper to claim that LInearGNN is a non-parametric solution. The authors should revise their description of LinearGNN to avoid confusion.\\n\\n3. This paper mentions that it is always possible to cheat the fully-inductive setup by training a separate instance of existing models for each test dataset (in Line 75). However, the proposed LinearGNN operates like what it just said by training a linear layer with a graph convolution operation for a test graph and the authors called this LinearGNN a non-parametric solution, or preprocessing step (in Table 1). It's hard to convince the readers that the proposed method is a fully-inductive graph learning method. \\n\\n4. Though the authors show that GraphAny has better average performance in total 31 graphs in Table 2. However, the experimental results in Table 5 shows that GAT outperforms GraphAny in 18 out of 31 graphs, which means that the proposed method does not have advantage in the fully inductive learning setting. In addition, GAT is a baseline proposed in 2018, and many recent methods can outperform GAT in most of these graphs. \\n\\n5. How does the different values of t influence the performance of GraphAny on different datasets? It's better to include an ablation study on the effect of t.\", \"questions\": \"1. How do you get the attention score in equation 5? Do you just sum all elements in matrix $P_u^{i}$ in equation 10? What is the learnable weight in the attention module as shown in figure 3?\\n\\n2. Can you further explain the experimental setting in figure 5? What does the density mean? Since the value is in the range of [0, 1], is this value normalized?\\n\\n3. This paper mentions that it is always possible to cheat the fully-inductive setup by training a separate instance of existing models for each test dataset (in Line 75). 
However, the proposed LinearGNN operates like what it just said by training a linear layer with a graph convolution operation for a test graph and the authors called this LinearGNN a non-parametric solution, or preprocessing step (in Table 1). It's hard to convince the readers that the proposed method is a fully-inductive graph learning method. Can the authors clearly differentiate your approach from the \\\"cheating\\\" setup?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for acknowledging the merits of our work. Below, we would like to comment on the identified weaknesses.\\n\\n**W1. LinearGNN is a data preprocessing method.**\\n\\nYes, your understanding is correct. We recognize LinearGNN as a fast and solvable solution for node classification, which enables GraphAny\\u2019s inductive inference without training. We don\\u2019t think this is a weakness, since GraphAny learns inductive attention scores to combine predictions from multiple LinearGNNs.\\n\\n**W2. Any model that can be solved analytically is fully-inductive. GraphAny is overclaimed.**\\n\\nFully-inductive not only implies inference on any graph, but also **generalization** to new graphs. We would like to highlight that LinearGNNs, and other GNNs with solvable weights actually yield a transductive function $f=\\\\mathbb{R}^{d} \\\\rightarrow \\\\mathbb{R}^c$ and cannot generalize to any new datasets with different feature and label spaces that require $f\\u2019=\\\\mathbb{R}^{d\\u2019} \\\\rightarrow \\\\mathbb{R}^{c\\u2019}$.\\n\\nBy contrast, the function $f_\\\\theta: \\\\mathbb{R}^{t(t-1)} \\\\rightarrow \\\\mathbb{R}^t$ learned by GraphAny operates on a fixed dimension input regardless of the dataset, and thereby generalizes to new datasets. Hence, GraphAny should be considered as fully-inductive.\\n\\n**W3. A recent work, GraphControl, is also fully-inductive.**\\n\\nThank you for bringing the recent related work on GraphControl to our attention. We have reviewed this work and acknowledge its relevance. However, GraphControl is still a transductive model with the pre-training-fine-tuning pipeline, which cannot transfer to new graphs with different label spaces. For example, the graph-prompt features and the output layer for prediction must be learned end-to-end for a new graph task. In contrast, we focus on the more challenging fully-inductive setting, where the model must generalize to unseen graphs *without additional training*. Therefore, the presence of this related work does not diminish the unique contribution of GraphAny. We\\u2019ve also updated the manuscript and discussed GraphControl in the related work session.\\n\\n**W4. Provide the data split ratios for test datasets.**\\n\\nWe thank the reviewer for the comment and updated the dataset split ratio in Table 3. It\\u2019s noteworthy to highlight that we did not tweak the dataset training ratio to make our results look better. We strictly follow the splits of the original data source (e.g. DGL and PyG) if they exist. For those that don\\u2019t provide splits, we follow the standard semi-supervised settings (20 labeled nodes per class for training and the same amount of data for valid and test sets).\\n\\n**W5. GraphAny has an advantage over the setting of semi-supervised learning, not in the opposite way.**\\n\\nWe respectfully disagree with your claim that GraphAny has an additional advantage over the semi-supervised training baselines. The ways where semi-supervised models and GraphAny use training labels are totally different: semi-supervised models use training labels to perform gradient descent, while GraphAny only uses training labels for inference, without changing any of its parameters. We emphasize that semi-supervised models have to train 31 separate models (with 31 different sets of hyperparameters) in order to perform inference on 31 datasets. By contrast, GraphAny only trains 1 model using 1 dataset and can perform inference on 31 datasets. 
Since semi-supervised models additionally leverage the training labels to tune parameters and the validation sets to search hyperparameters, they are supposed to have an advantage over GraphAny.\\n\\nBesides, your understanding of LinearGNNs is correct, they do rely on the training labels of the inductive datasets to perform training-free inference.\"}",
"{\"title\": \"Reply to Authors' Rebuttal\", \"comment\": \"Thank you for the detailed explanation and the additional experimental results. You have addressed my concerns. I will increase my score to 6.\"}",
"{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for your thoughtful feedback.\\nWe are glad our responses addressed your concerns.\"}",
"{\"title\": \"Rebuttal by Authors (cont.)\", \"comment\": \"Here, we discuss the proposed questions:\\n\\n**Q1. How to get the attention scores?**\\n\\nWe don\\u2019t sum up the $P_u^{t}$ in the Equation 10 to obtain the attention score. The $P_u(i,j)$ is a dimension of the entropy-normalized feature for the attention function. As explained in Section 3.3, The learnable weights of GraphAny is the inductive attention module$f_\\\\theta: \\\\mathbb{R}^{t(t-1)} \\\\rightarrow \\\\mathbb{R}^t$, which computes the attention score for fusing LinearGNN predictions based on their interactions. \\n\\n**Q2. Experimental settings in Figure 5.**\\n\\nThe experiments in Figure 5 illustrate the probability distribution of different features for inductive attention. Specifically, we compare the distributions between Euclidean distances (the first row) and entropy-normalized (the second row) features between five channels: $\\\\boldsymbol{F} = \\\\boldsymbol{X}$ (Linear), $\\\\boldsymbol{F} = \\\\bar{\\\\boldsymbol{A}} \\\\boldsymbol{X}$ (LinearSGC1), $\\\\boldsymbol{F}=\\\\bar{\\\\boldsymbol{A}}^2 \\\\boldsymbol{X}$ (LinearSGC2), $\\\\boldsymbol{F}=(\\\\\\\\boldsymbol{I}-\\\\bar{\\\\boldsymbol{A}}) \\\\boldsymbol{X}$ (LinearHGC1) and $\\\\boldsymbol{F} = (\\\\boldsymbol{I} - \\\\bar{\\\\boldsymbol{A}})^2 \\\\boldsymbol{X}$ (LinearHGC2) with $\\\\bar{\\\\boldsymbol{A}}$ denoting the row normalized adjaceny matrix. The density means the estimated density of the empirical distribution function based on the observed distribution of distances/features. The Euclidean distances are computed by features normalized to unit length, the entropy-normed features are normalized via dynamically determining the sigma for each node (check the explanation under equation 10).\\n\\n**Q3. Why transductive models are cheating?**\\n\\nPlease refer to our response to W3.\\n\\n\\n\\n**Reference**\\n\\n[Luan et al., 2023] Revisiting Heterophily For Graph Neural Networks. NeurIPS 2023.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer GtqS,\\n\\nThank you for your valuable comments and suggestions to improve our paper. In response to your feedback, we have clarified the differences between the proposed fully-inductive setting, analytical solutions, and transductive settings. Additionally, we have updated the manuscript to include a discussion on GraphControl and provided detailed explanations of the data split ratios.\\n\\nWe would appreciate it if you could confirm whether our responses have adequately addressed your concerns.\\n\\nWe look forward to your feedback and thank you for your time and consideration.\"}",
"{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for acknowledging the merits of our work. Below, we would like to comment on the identified weaknesses and answer your questions.\\n\\n**W1. Lack of baselines that generalize to more datasets.**\\n\\nWe agree with the reviewer that some LLM-based approaches can generalize to new datasets based on text descriptions. However, this paper aims to study a more fundamental setting of generalizing across datasets purely based on arbitrary continuous features and categorical labels. LLM-based approaches can\\u2019t fit into this challenging setting, since most datasets don\\u2019t have text descriptions in our experiments.\\n\\nWhile GCN and GAT seem to be not strong enough, we additionally consider stronger baselines that work well for both homophilic and heterophilic datasets, including ACM-SGC and ACM-GNN [Luan et al., 2022]. As shown in Table 6, GraphAny achieves better or comparable performance using much less computational resources (a detailed discussion is provided in Appendix E.2).\\n\\n\\n**W2 and Q1: Comparison of GraphAny against a fixed combination of 5 LinearGNN components.**\\n\\nWe thank the reviewer for the feedback. We have those results in Figure 9 actually (although not explicitly discussed). The transductive results at batch 0 (green line) are the requested random-initialization result, which are much worse than the final performance of GraphAny. \\n\\nBesides, we provide additional results in Figure 10, where GraphAny is compared with the baseline that averages all the predictions (denoted as MeanAgg). It is obvious that GraphAny consistently outperforms the baseline MeanAgg by a significant margin, demonstrating the necessity of learning the coefficients rather than using fixed ones.\\n\\n\\n**Q2: Any insight why LinearSGC1 and LinearSGC2 tend to get higher weights.**\\n\\nYour observation that LinearSGC1 and LinearSGC2 are the most effective graph convolution kernels is correct. This finding aligns with the standard homophily assumption, which suggests that connected nodes are likely to share similar labels, making simpler convolutional kernels like LinearSGC1 and LinearSGC2 particularly effective in such settings.\\n\\nRegarding transferability, we believe that simpler graph kernels might exhibit stronger inductive generalization due to introducing less inductive bias. This reduced bias allows these kernels to generalize better to new graphs. However, it is important to note that these simple kernels might also have less expressive power and, hence, weaker transductive performance, as they may not fully fit the complex distributions during training. Therefore, we believe that there exists a potential tradeoff for designing inductive graph models: one might need to balance transductive performance (specialized for a specific graph) and inductive generalization (transferability to new graphs).\\n\\n**Reference**\\n\\n[Luan et al., 2022] Revisiting Heterophily For Graph Neural Networks. NeuriPS 2022.\"}",
"{\"title\": \"Rebuttal by Authors\", \"comment\": \"We sincerely thank the reviewer for pointing out areas of improvement for our paper. Below, we address each of the identified weaknesses:\\n\\n**W1. Many methodology details are missing**\\n\\nW1.1 We sincerely thank the reviewer for the constructive feedback on the experimental settings of Figure 5. We\\u2019ve updated the caption of Figure 5 to make it clear and self-contained. \\n\\nW1.2 The attention vector $\\\\alpha$ is predicted by the attention module $f_\\\\theta$ based on the entropy-normalized features, as shown in the bottom right of Figure 3. In practice, we implement $f_\\\\theta$ as an MLP and all the learnable parameters are the parameters of that MLP, as stated in line 330-334.\\n\\nW1.3 We thank the reviewer for pointing this out. We call $\\\\hat{y}_u^{(i)}$ node features because they serve as input to our inductive attention module for node $u$. We understand this slightly abuses the term \\u201cnode features\\u201d used in node classification. To distinguish $\\\\hat{y}_u^{(i)}$ against $X$, we have revised to call $\\\\hat{y}_u^{(i)}$ \\u201cfeatures for inductive attention\\u201d.\\n\\n**W2: LinearGNNs are not non-parametric since they involve a learnable weight matrix W.**\\n\\nBy non-parametric, we refer to having no *learnable* parameters. Although LinearGNNs do have weights (which we solve analytically through the pseudoinverse), they are not learnable. Besides, it is noteworthy to point out that the LinearGNNs are still transductive models that do not transfer any knowledge across graphs, whereas GraphAny learns inductive functions of LinearGNNs predictions that can generalize to unseen graphs without additional training.\\n\\n**W3 and Q3: Why are transductive models cheating in the fully-inductive setting?**\\n\\nFor the standard transductive setting, separate models with **different sets of parameters** and hyperparameters are trained for different datasets. In fully-inductive settings, **one set of parameters** is trained for different datasets and generalize to new graph **without additional training**. That said, transductive models cannot perform fully-inductive inference in the first place as the input and output spaces vary. However, the transductive models can be viewed as strong baselines for inductive models: as they leverage unobserved validation sets of inductive data to optimize parameters when training the model (e.g. optimizer, learning rate, dropout ratio, number of epochs) and tuning hyperparameters (e.g. selecting the depth of graph convolutions).\\n\\nAs mentioned in Section 3.3 and our responses in W1.2, GraphAny is a fully-inductive model, which does not model the input and output spaces and learns to fuse different predictions. Once trained, the learned inductive attention $f_\\\\theta: \\\\mathbb{R}^{t(t-1)} \\\\rightarrow \\\\mathbb{R}^t$ is ready to generalize to arbitrary graphs.\\n\\n\\n**W4. GAT outperforms GraphAny on 18 out of 31 datasets.**\\nFirst, as we have mentioned in our response to W3, it is unfair to compare a fully-inductive model to a transductive model like GAT as transductive models are trained on known labeled nodes and leverage validation sets to select hyperparameters for each of the 31 datasets. GraphAny runs inference on all new unseen graphs outside its small training set (eg, training on Wisconsin and running inference on 30 other graphs). 
Nevertheless, even in the fully-inductive setting, GraphAny outperforms several transductive baselines.\\n\\nTo further address your concerns, we added two recent baselines, ACM-SGC and ACM-GNN [Luan et al., 2022], which have good homophilic and heterophilic performance. As shown in Table 6, GraphAny achieves better or comparable performance using far fewer computational resources (a detailed discussion is provided in Appendix E.2).\\n\\n**W5. How do different values of t influence the performance of GraphAny?**\\n\\nWe thank the reviewer for the advice and have added the ablations on t in Appendix E.1. As shown in Figure 10, for different graph convolution operators, such as LinearGNNs, Chebyshev polynomials, and personalized PageRank, increasing t has diverse effects. For LinearGNNs, increasing t and adding low-pass graph convolutions (LinearSGC1 and LinearSGC2) significantly enhance performance. In contrast, for Chebyshev graph convolutions, adding high-order convolutions reduces performance. For personalized PageRank, adding more local channels consistently improves results.\"}",
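To make the shape of the inductive attention concrete, the sketch below fuses the $t$ channel predictions for a single node from pairwise interaction features of dimension $t(t-1)$. The squared-distance features are a simplified stand-in for the entropy-normalized features described in the paper, and `mlp` plays the role of $f_\theta$; all names are hypothetical.

```python
import torch

def fuse_predictions(preds, mlp):
    """preds: (t, c) per-channel class distributions for one node.
    mlp  : maps R^{t(t-1)} -> R^t, playing the role of the learnable f_theta."""
    t = preds.shape[0]
    # Pairwise interaction features between distinct channels: t*(t-1) scalars.
    feats = [(preds[i] - preds[j]).pow(2).sum()
             for i in range(t) for j in range(t) if i != j]
    feats = torch.stack(feats)                      # (t*(t-1),)
    alpha = torch.softmax(mlp(feats), dim=-1)       # (t,) attention scores
    return alpha @ preds                            # (c,) fused prediction

# A possible mlp, assuming t channels:
# mlp = torch.nn.Sequential(torch.nn.Linear(t * (t - 1), 64),
#                           torch.nn.ReLU(), torch.nn.Linear(64, t))
```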
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewer o9Mt,\\n\\nThank you for your valuable comments and suggestions for improving our paper. Following your feedback, we have explained the methodology details and clarified our statement regarding transductive models and LinearGNN. We have also updated our manuscript to make it more comprehendible. We would like to confirm whether our response has adequately addressed your concerns. \\n\\nWe look forward to your feedback.\"}",
"{\"summary\": \"This paper studies the problem of fully inductive node classification, where limited parameters are learned from one small graph, and inference other unseen graphs. The authors propose GraphAny, which consists of two components, one is a set of linearGNNs, and the other is a learnable attention MLP function. Using pseudo-inverse, LinearGNNs directly compute the node embeddings of corresponding linearGNN channels. Then a sophisticated attention technique which has properties of permutation-invariant and robust dimension generalization is used to combine these embeddings. The extensive experiments show that GraphAny gains significant improvements over the state-of-the-art methods in many datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"1. This paper proposes a novel problem setting seemingly impractical, and provides a reasonable solution to it. Previously, I was doubtful about the feasibility of graph foundation models, since unlike in NLP and CV, graph data is more universal and diverse. The information heterogeneity between different graphs may make this fully inductive setting impossible, i.e., I didn't think the knowledge in different graphs has much in common. However, the authors provide an impressive and valid solution to this problem, which is a good contribution to the community.\\n\\n2. The proposed method is well-motivated and well-designed. The attention module that tackles dimensionality and permutation issues is particularly novel and interesting, with strong intuition. \\n\\n3. The experiments are extensive and convincing. An impressive number (31) of datasets are involved in this fully-inductive setting, and the good average score of GraphAny demonstrates its effectiveness.\\n\\n4. The ablation study is comprehensive and insightful. The authors provide a clear understanding of the importance of each component in GraphAny.\", \"weaknesses\": \"1. (Explainability) I didn't see any explanation of one very important question: why could the knowledge learned from one graph be transferred to another unseen and unrelated graph? The authors should provide more intuitive insights on this point. From my point of view, LinearGNNs with different graph operations may serve as probes to extract different types of intrinsic knowledge from the graph, then the permutation and dimension invariant attention module could combine this knowledge in a semantic space where the common knowledge of graphs is shared. The authors should provide more insights on this point, i.e., why it works well.\\n\\n2. (Experiments) Although the proposed AnyGraph shows a high average performance, it is not the best in all datasets, especially in some large datasets such as Arxiv, Reddit and Products. I don't think homophily could explain this, since AnyGraph (Arxiv) also performs poorly. The authors could provide more insights on why AnyGraph fails in these datasets, and how to possibly improve it.\\n\\n3. (Experiments) The transductive baselines (GCN, GAT) are not strong enough. Since the benchmark contains so many datasets ranging from highly homophily to highly heterophily, baselines [1,2,3] that could fit both homophilous and heterophilous graphs should be compared. I highly recommend the authors to add some of these baselines to make the results more convincing.\\n\\n\\n[1] Luan, S., Hua, C., Lu, Q., Zhu, J., Zhao, M., Zhang, S., ... & Precup, D. (2022). Revisiting heterophily for graph neural networks. 
Advances in Neural Information Processing Systems, 35, 1362-1375.\\n\\n[2] Lim, D., Hohne, F., Li, X., Huang, S. L., Gupta, V., Bhalerao, O., & Lim, S. N. (2021). Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34, 20887-20902.\\n\\n[3] Zhu, J., Rossi, R. A., Rao, A., Mai, T., Lipka, N., Ahmed, N. K., & Koutra, D. (2021, May). Graph neural networks with heterophily. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 12, pp. 11168-11176).\", \"questions\": \"Most of the questions and suggestions are already mentioned in the weaknesses section. I would like to mention some minor points here.\\n\\n1. I would like to see more graph operations used in LinearGNN instead of just X, AX, A^2X, (I-A)X, (I-A)^2X. For example, the Chebyshev polynomial operation, the PageRank operation, the normalized Laplacian operation, etc. I think more operations could provide more diverse perspectives of the graph, and thus improve the performance of GraphAny at a little extra cost.\\n\\n2. I doubt the time complexity in Table 1, since the pseudo-inverse is used in LinearGNN, which is computationally expensive, up to O(n^2 d). Could the authors explain this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your reply which explains my concerns. I think this paper is a really aggressive attempt in the graph field. I will keep my positive score.\"}",
"{\"title\": \"Response to Reviewer Feedback\", \"comment\": \"Thank you for your thoughtful and detailed feedback, as well as the time and effort you have dedicated to reviewing our work. We greatly appreciate your interest in GraphAny and the insights you have shared.\\n\\nYour understanding of how GraphAny is trained is indeed precise. To our understanding, the graph-specific (transductive) part that leverages labeled nodes to predict is essential due to the varying input and output spaces across graphs. However, GraphAny demonstrates that a transferable function\\u2014namely, the proposed inductive attention\\u2014can be successfully applied across graphs, which we view as a key contribution.\\n\\nWe would also like to clarify a point regarding GraphControl. While GraphControl can operate in few-shot scenarios, it cannot perform true zero-shot inference. As outlined in Algorithm 1 of the GraphControl paper, fine-tuning is still required for new datasets to learn the MLPs and output layer specific to each graph. Nonetheless, we acknowledge and appreciate your suggestion, and we have incorporated it to further enrich the related work section of our paper.\\n\\nThank you once again for your valuable feedback and for raising your score based on the addressed concerns. Your recognition of GraphAny\\u2019s innovations means a lot to us.\"}",
"{\"title\": \"Comment by Authors\", \"comment\": \"Thank you for your response and for updating your score!\\nWe are glad our responses addressed your concerns.\"}",
"{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you for your quick reply. We're glad to know that our responses have addressed some of your concerns.\\n\\n**Q1. Why is GraphAny significantly faster than ACM-GNN?**\\n\\nGraphAny's speed advantage over ACM-GNN can be attributed to several factors. Firstly, ACM-GNN is a transductive model that requires end-to-end training on each graph, resulting in the training of 31 models, each with its own set of parameters and hyperparameters. In contrast, GraphAny trains on a single graph with one consistent set of parameters/hyperparameters. Additionally, GraphAny enjoys better time complexity; its preprocessing occurs in $O(|\\\\mathcal{E}|)$ time, and each epoch processes in $O(|\\\\mathcal{V}_L|)$ time, as opposed to $O(|\\\\mathcal{E}|)$ for ACM-GNN.\\n\\n**Q2. Are there any baselines specialized for Arbitrary Graphs?**\\n\\nTo the best of our knowledge, we made the first attempt to address the fully-inductive learning problem on arbitrary graphs. Currently, GraphAny is the only model that supports inductive learning across arbitrary graphs.\"}",
"{\"comment\": \"Thanks for your response. I have carefully read the authors reply and other reviews. Besides, I have checked the code, as I am very interested in this paper. In my opinion, for each dataset, GraphAny needs to retrain multiple LinearGNNs using method like least squares, which makes this part non-transferable. However, the authors innovatively proposed the Inductive Attention component, which serves as a Fully-Inductive result selector. My main concerns has been addressed. Therefore, I have decided to raise my score from 6 to 8.\\n\\nWell, I still cannot fully agree with your claim that GraphAny is the first fully-inductive method. This is because GraphAny still requires some labeled data for downstream training (which is crucial for the quality of LinearGNNs as they are non-transferable). As I mentioned earlier, methods like GraphControl with appropriate prompts can be applied in few-shot scenarios (e.g., 3-shot or 5-shot) or even zero-shot, whereas GraphAny requires 20-shot or more. From the perspective of required labels for downstream tasks, GraphAny does not hold a clear advantage. Nonetheless, this does not diminish my recognition of the paper's significant innovation and valuable insights.\"}",
"{\"metareview\": \"This paper proposes a general graph neural network that can be applied to new graphs which may have different feature and label space. Authors show that the approach outperforms a wide range of methods. While some reviewers have concerns on the selection of baseline methods and a few experimental setups, authors were able to clarify in the rebuttal. Overall, reviewers agree that the paper presents an important algorithm and is technically sound.\", \"additional_comments_on_reviewer_discussion\": \"Some reviewers had questions on the selection of baseline methods and clarity of experiment setup, which were addressed during rebuttal. A the end, all reviewers are fairly positive and thus AC-reviewer discussion was not needed.\"}",
"{\"comment\": \"Thank you very much for your experiments and the explanation, which solves my concerns.\\n\\nI have two more questions about ACM-GNN. \\n\\n- You said in Appendix E.2 that tuning ACM-GNN requires 672GPU while GraphAny requires 4GPU hours. Why does ACM-GNN consume a lot of time? I understand that ACM-GNN is a quite complex model, is that the reason? \\n\\n- Also I know that ACM-GNN is not for Arbitrary Graphs, it is for heterophilic graphs. There is one thing I am more curious about, Are there any baselines specialized for Arbitrary Graphs? \\n\\nThank you again for your reply.\"}"
]
} |
1Qn1pMLYas | On the Cycle Consistency of Image-Text Mappings | [
"Caroline Chan",
"Hyojin Bahng",
"Fredo Durand",
"Phillip Isola"
] | The increasing exchange of image and text in large multimodal models leads us to ask: to what degree are mappings from text to image, and back, cycle-consistent? First, we find that current image-to-text models paired with text-to-image models do achieve a degree of perceptual cycle consistency, even when these models are not trained to have this effect. However, these mappings are far from perfect, motivating us to analyze in what ways they fail. First, we observe a strong correlation between cycle consistency and downstream performance in both image captioning and text-to-image generation. Next, we investigate how divergent text-to-image mappings are as a function of the number of objects described by the text, and how it affects achieving cycle consistency. Surprisingly, we find that more descriptive text leads to a broader distribution of generated images, but also results in overall better reconstructions. Finally, we show possible challenges of training cycle consistent models due to the sensitivity of text-to-image models. | [
"cycle consistency",
"multimodal learning",
"vision-language modeling",
"text-to-image generation",
"synthetic data"
] | Reject | https://openreview.net/pdf?id=1Qn1pMLYas | https://openreview.net/forum?id=1Qn1pMLYas | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ugXEi9d8gn",
"n1U57p4EWz",
"lMXU0UW4gr",
"gM7lN9ptfN",
"fQxIj26w4S",
"bTchXkBpnz",
"ZbuLtOyxHW",
"UCJCUmpydi",
"TWMImtnC7W",
"TCdOBblVQe",
"PSvqIB0Aek",
"O0fxCqpS20",
"LRQ5mMpEZQ",
"Ka1eURUT0s",
"EM1Elf2V0A",
"D0bXN3zItm",
"6YL3HikcgU",
"5wqUpfJMmk",
"5sYEoAodbf",
"4dvm0A6dQu",
"41YGuYJii2",
"1rtEwSdx1u"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732793010481,
1730701546512,
1732792994596,
1733185880491,
1737523951585,
1733285803600,
1732793238908,
1732793664858,
1730708350286,
1730284671472,
1732859405157,
1730590281336,
1732793167710,
1730234820781,
1734558576554,
1732792798001,
1733217588743,
1732794103589,
1732793761882,
1732793803846,
1732793534899,
1732793972611
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_QcCd"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_go1x"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_oP89"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_oP89"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_31TL"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_BHgf"
],
[
"ICLR.cc/2025/Conference/Submission8962/Area_Chair_nf9S"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Reviewer_31TL"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8962/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Individual response to go1x (2)\", \"comment\": \"### Q5. How can the proposed method avoid hallucination in I2T and compositionality problem in T2I?\\nWe clarify that we are **not proposing a new method**, but aim to provide an **empirical study on cycle consistency in image-text mappings**. We have updated Section 4.3 to study the relationship between object hallucination in text and cycle consistency. Contrary to the reviewer\\u2019s concern, we observe that cycle consistency strongly correlates with **reduced hallucination** in text (updated Figure 10 and 11, Section 4.3) and **improved compositionality** in images (Figure 6, Section 4.2).\\n\\nNote that in Section 4, caption quality generated by LLaVA1.6 and LLaVA-OV is lower compared to results in other sections, due to using suboptimal prompts. We will update the results in the final manuscript following implementation details in Appendix A1.\\n\\n\\n### Q6. Cycle consistency on long captions. \\nAs suggested, we have added Section 4.4 analyzing the effect of caption length on cycle consistency. We summarize captions from the DCI dataset into varying lengths (5, 10, 20, 30, and 50 words) using LLaMA3-8B-Instruct. We stop at 50 tokens to not overflow the token limit for text-to-image models. Updated Figure 12 shows that cycle consistency improves as captions become more descriptive and dense, especially for the higher performing models FLUX-Time and SD3.\\n\\n[1] Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. Li et al., PMLR 2023. \\n\\n[2] Visual instruction tuning. Liu et al., NeurIPS 2023.\\n\\n[3] Improved baselines with visual instruction tuning. Liu et al., 2023.\\n\\n[4] Internvl-2.0. OpenGVLab Team, 2024. https://internvl.github.io/blog/2024-07-02-InternVL-2.0/.\\n \\n[5] Llava-onevision: Easy visual task transfer. Li et al., 2024.\\n\\n[6] A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, Urbanek et al., CVPR 2024.\"}",
"{\"summary\": \"This paper analyzes the cycle consistency of current image/text generative models, which represents how well the original input is preserved when it consecutively passes through two generative models. To quantify the cycle consistency of images and text, the authors use metrics that measure perceptual similarity and present results for various combinations of image and text generative models. Using several benchmarks, the authors suggest that cycle consistency alone can imply the performance of models on downstream tasks by showing a high correlation between the two, thereby eliminating the need for creating additional test sets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents various quantitative analysis results on the proposed claim while also visualizing various qualitative results.\", \"The authors analyze a less explored aspect of generative models and provide insights into its significance.\"], \"weaknesses\": [\"There is little analysis on the differences between the models used to measure cycle consistency. The paper simply mentions that recent models perform better, without analyzing whether the differences stem from their training data, objective functions, specific architectures, etc. Authors could have provided a table summarizing these differences and discussed how these factors may contribute to the observed performance differences in cycle consistency.\", \"In sections 4 and 5, it is unclear what message the authors are trying to convey. It is ambiguous how these sections relate to the cycle consistency discussed in sections 2 and 3. Authors could have better linked these sections to the overall narrative, such as explicitly stating how the divergence in text-to-image mappings (Section 4) and sensitivity in image-to-text mappings (Section 5) impact or relate to cycle consistency.\"], \"questions\": [\"When calculating cycle consistency for each modality, one of two generative models is fixed. (SDXL Turbo for image cycle consistency / LLaVA 1.5-13b for text cycle consistency) Would results show the same trend if the fixed models were changed?\", \"If richer and more detailed data improves cycle consistency, can we say that recent models show better performance because they use quality data? It could lead to valuable insights if authors examined the training data characteristics of the better-performing models to see if there's a correlation with data quality, and discussed how this relates to cycle consistency performance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Individual response to go1x\", \"comment\": \"Thank you for the insightful review and the helpful feedback.\\n\\n\\n### Q1. What factors affect cycle consistency?\\nAs suggested by Reviewers go1x, QcCd, and 31TL, we have included an analysis of factors contributing to cycle consistency in the updated Section 3. The main findings are highlighted as follows:\\n1. **Cycle consistency improves with LLM scale.** An image-to-text model consists of a vision encoder, a projector, and a large language model (LLM). Scaling the vision transformer (ViT) for the vision encoder is reported to enhance performance [1], yet a simple MLP projection remains the dominant approach [2-5]. Since no model provides open-sourced weights with varying vision encoder scales, we focus our analysis on ablating the scale of the LLM. Figure 4 demonstrates that scaling the LLM enhances image and text cycle consistency across all image-to-text model families. Figure 5 visualizes this effect\\u2014despite being trained on the same dataset and architecture, only InternVL2-40B successfully captures both the color and the presence of a corner turret.\\n2. **Cycle consistency improves with re-captioned dataset quality**. Table 1 demonstrates that the quality of the re-captioned dataset (e.g., dataset re-captioned by GPT-4V, LLaVA1.6-34B) plays an important role in improving image cycle consistency, often outperforming models trained on larger datasets annotated by less-performant models (e.g., BLIP). On the other hand, text cycle consistency shows little difference between the LLaVA models, as the input text from sDCI often lacks fine-grained detail (evidenced in Figure 16) compared to longer and more descriptive synthetic captions, such as those produced by LLaVA1.6 and LLaVA-OV. We believe higher-quality human annotations and text-to-image models with longer context would enhance the analysis of text cycle consistency. We exclude InternVL2 from this analysis as its pre-training dataset details are not disclosed.\\nWe detail differences in architecture, scale, and dataset in Tables 1, 5, 6.\\n\\n\\n### Q2. Why specific combinations of I2T and T2I models perform differently in image and text cycle consistency.\\nMost of the model combinations perform similarly on our updated results using the Densely Captioned Images dataset. However, there are exceptions: LLaVA1.5-13B achieves higher **text** cycle consistency than image cycle consistency relative to the other VLMs, whereas LLaVA-OV-7B has a higher consistency on **image** over text, as shown in Figure 14,15 (Appendix). We think this may be due to several reasons: LLaVA1.5 captions tend to point out less specific details, whereas other models try to pinpoint exact locations or meanings of photographs. Of course this is desirable for describing real images, but leads to variability when reconstructing text from synthetic images.. Secondly, LLaVA1.5 uses Vicuna-1.5 as its LLM which is a finetuned version of Llama 2. The sDCI captions which we use as our text inputs, are summaries of the full captions from DCI created by Llama2. Because the input text and output texts are created by similar models, it is likely that this contributes to their high alignment scores.\\n\\n\\n### Q3. The paper does not propose solutions for prompt sensitivity. \\nWe clarify that we are **not proposing a new method**, , but aim to provide an **empirical study on cycle consistency in image-text mappings**. 
\\n\\nAs suggested by Reviewers QcCd and oP89, we extend our analysis to study how **random seed selection, prompt and caption style, and temperature sampling** contribute to variance in cycle consistency (updated Section 5). Table 2 shows that image-to-text models exhibit higher variance due to temperature sampling but remain relatively robust to changes in prompt style. In contrast, text-to-image models are significantly more sensitive to caption style than to random seed sampling. Note that we excluded InternVL2-40B due to lack of compute, and we will add it to the final manuscript.\\n\\n\\n### Q4. MS COCO captions often lack detailed descriptions of the images.\\nWe agree with the reviewer\\u2019s suggestion and have replaced MS COCO with the Densely Captioned Images (DCI) dataset [6], which features **higher-resolution** images annotated with **denser captions**. Due to the limited prompt length of text-to-image models, we use sDCI, i.e., DCI captions summarized by an LLM to fit within 77 tokens, and sample 1K captions from the train split. Average image resolution and number of CLIP tokens per caption are as follows:\\n\\n| Dataset | Resolution | Tokens/Cap |\\n|-----------|-----------|-----------|\\n| sDCI | 1500\\u00d72250 pixels | 49.21 |\\n| COCO | 480\\u00d7640 pixels | 13.54 |\\n\\nImproving dataset quality revealed key factors influencing cycle consistency (updated Section 3.2). We thank the reviewer for the insightful suggestion.\"}",
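The image-cycle counterpart of the earlier text-cycle sketch can be written the same way: caption a real image, regenerate it from the caption, and compare the two images with a perceptual metric. The callables below are hypothetical placeholders, not the authors' interfaces.

```python
def image_cycle_consistency(image, image_to_text, text_to_image,
                            perceptual_similarity, seed=0):
    """Image -> text -> image cycle for one (I2T, T2I) model pair.

    image_to_text, text_to_image, and perceptual_similarity are placeholders
    for a captioner, a diffusion model, and a perceptual metric (e.g. DreamSim).
    """
    caption = image_to_text(image)                      # forward mapping
    reconstruction = text_to_image(caption, seed=seed)  # inverse mapping
    return perceptual_similarity(image, reconstruction)
```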
"{\"title\": \"Individual response to oP89 (3)\", \"comment\": \"### Q1. Benefits of cycle consistency.\", \"we_highlight_the_benefit_of_cycle_consistency_in_section_4\": [\"cycle-consistent mappings **strongly correlate** with **improved descriptiveness and reduced hallucination in generated text, and better prompt-following in images**, i.e., several desired properties when building a proficient multimodal model. Because cycle consistency aligns well with performance, it can be used as a **self-supervised** proxy for such performance measures. Furthermore, cycle consistency allows us insights into what kinds of texts and images are easily exchangeable, and what kinds of data are harder to translate.\", \"### Q2. Change in manuscript.\", \"We have mainly updated the manuscript to better communicate the results **requested by the reviewers**. However, we emphasize that the core topic, main results, and conclusions of the paper remain unchanged:\", \"Figures 6 and 7: Trends remain the same, but we average across **all model combinations** and plot against both cycles to address concerns from Reviewers QcCd and BHgf.\", \"Table 2: We plot against **cycle consistency** rather than diversity to address concerns from Reviewers QcCd and oP89.\", \"Figure 12: Addresses concerns from Reviewers 31TL and oP89.\", \"Table 1, Figure 4: Addresses concerns from Reviewers 31TL, go1x, and QcCd.\", \"We also improved plot design and added qualitative visualizations to **enhance the quality** of the manuscript.\", \"We found the questions raised by the reviewers to be highly meaningful, accompanied by its results, which led to a **reorganization** of the sections. We are sincerely grateful for these insightful suggestions, which significantly enhanced the depth and quality of our analysis.\"]}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"### Q1. Change in manuscript.\\nWe have mainly updated the manuscript to better communicate the results **requested by the reviewers**. However, we emphasize that the core topic, main results, and conclusions of the paper remain unchanged:\\n* Figures 6 and 7: Trends remain the same, but we average across **all model combinations** and plot against both cycles to address concerns from Reviewers QcCd and BHgf.\\n* Table 2: We plot against **cycle consistency** rather than diversity to address concerns from Reviewers QcCd and oP89.\\n* Figure 12: Addresses concerns from Reviewers 31TL and oP89.\\n* Table 1, Figure 4: Addresses concerns from Reviewers 31TL, go1x, and QcCd.\\n* We also improved plot design and added qualitative visualizations to **enhance the quality** of the manuscript.\\n\\nWe found the questions raised by the reviewers to be highly meaningful, accompanied by its results, which led to a **reorganization** of the sections. We are sincerely grateful for these insightful suggestions, which significantly enhanced the depth and quality of our analysis.\\n\\n### Q2. Contributions.\\nWe clarify that the point of our paper is **not** to claim that training for cycle consistency translates to better model performance (although we will cite the prior work that shows this). Cycle consistency can offer practical value in the following ways: Given models that were **not explicitly trained to be cycle-consistent**, observing cycle consistency **at test time** strongly correlates with improved descriptiveness and reduced hallucination in generated text, and better prompt-following in images, i.e., several desired properties for a high-quality multimodal mapping. Therefore, it can be used as a **self-supervised** proxy for such performance measures.\"}",
"{\"title\": \"Individual response to QcCd (2)\", \"comment\": \"### Q3. Model ablation for measuring cycle consistency.\\nAs suggested, we update Figure 6, 7, and 10 to report cycle consistency **averaged across all models**, instead of just fixing one model in the pipeline. We also extend the analysis to **include all four combinations**, additionally comparing text quality (descriptiveness, hallucination) and image quality (prompt-following) with both image and text cycle consistency. We observe that both cycles exhibit a **strong correlation** across modalities, with text cycle consistency being more prominent.\\n\\nAs requested, we report the Pearson correlation coefficient **per model**. The **correlation is consistently strong** for most models ($R^2 > 0.65$), except for BLIP2-2.7B and LLaVA-OV-0.5B with lower coefficients of 0.349 and 0.241, respectively. We attribute the low correlation to their use of small-scale, less-performant language models (OPT-2.7B, Qwen2-0.5B) as pre-trained backbones, which may cause poorer text reconstruction.\\n\\n| Fixed I2T Model | Text Cycle Consistency vs T2I Model Performance ($R^2$) | \\n|-----------|-----------|\\n| BLIP2-2.7B | 0.349 |\\n| BLIP2-6.7B | 0.657 |\\n| BLIP2-T5-xxl | 0.871 |\\n| LLaVA1.5-7B | 0.966 |\\n| LLaVA1.5-13B | 0.964 |\\n| LLaVA-OV-0.5B | 0.201 |\\n| LLaVA-OV-7B | 0.910 |\\n| LLaVA1.6-7B | 0.963 |\\n| LLaVA1.6-34B | 0.952 |\\n| InternVL2-2B | 0.935 |\\n| InternVL2-8B | 0.904 |\\n| InternVL2-26B | 0.879 |\\n| InternVL2-40B | 0.913 |\\n| Average | 0.950 |\\n\\n|Fixed I2T Model| Image Cycle Consistency vs T2I Model Performance ($R^2$) | \\n|-----------|-----------|\\n| BLIP2-2.7B | 0.836 |\\n| BLIP2-6.7B | 0.834 |\\n| BLIP2-T5-xxl | 0.879 |\\n| LLaVA1.5-7B | 0.915 |\\n| LLaVA1.5-13B | 0.916 |\\n| LLaVA-OV-0.5B | 0.932 |\\n| LLaVA-OV-7B | 0.954 |\\n| LLaVA1.6-7B | 0.942 |\\n| LLaVA1.6-34B | 0.953 |\\n| InternVL2-2B | 0.904 |\\n| InternVL2-8B | 0.903 |\\n| InternVL2-26B | 0.880 |\\n| InternVL2-40B | 0.902 |\\n| All Models | 0.924 |\\n\\n|Fixed T2I Model| Text Cycle Consistency vs I2T Model Performance ($R^2$) | \\n|-----------|-----------|\\n| SD1.5 | 0.875 |\\n| SDXL-Turbo | 0.845 |\\n| SDXL | 0.879 |\\n| SD3 | 0.870 |\\n| FLUX Time | 0.861 |\\n| All Models | 0.864 |\\n\\n|Fixed T2I Model| Image Cycle Consistency vs I2T Model Performance ($R^2$) | \\n|-----------|-----------|\\n| SD1.5 | 0.741 |\\n| SDXL-Turbo | 0.759 |\\n| SDXL | 0.731 |\\n| SD3 | 0.790 |\\n| FLUX Time | 0.794 |\\n| All Models | 0.766 |\\n\\n\\n### Q4. Does cycle consistency correlate with data quality? \\nYes! Thanks to your valuable suggestions, we have discovered that the quality of the **re-captioned dataset** is associated withcycle consistency (mentioned in **Q1**). Table 1 details the re-captioned dataset for each model and their cycle consistency. \\n\\n\\n[1] Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. Li et al., PMLR 2023. \\n\\n[2] Visual instruction tuning. Liu et al., NeurIPS 2023.\\n\\n[3] Improved baselines with visual instruction tuning. Liu et al., 2023.\\n\\n[4] Internvl-2.0. OpenGVLab Team, 2024. https://internvl.github.io/blog/2024-07-02-InternVL-2.0/ \\n\\n[5] Llava-onevision: Easy visual task transfer. Li et al., 2024.\\n\\n[6] A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, Urbanek et al., CVPR 2024.\"}",
"{\"title\": \"Individual response to 31TL (2)\", \"comment\": \"### Q3. The claim that \\\"more descriptive text leads to a broader distribution of generated images\\\" is not convincing.\\nWe agree that the existing experiment does not properly control caption length \\u2013 instead it varies the number of elements described. To address this issue, we make the following changes:\\n1. We replace Section 4 with a new experiment that studies **cycle consistency as a function of caption density** (see updated Section 4.4). We better control the amount of information by summarizing captions from the DCI dataset [10] into varying lengths (5, 10, 20, 30, and 50 words) using LLaMA3-8B-Instruct. We find that cycle consistency improves as captions become more descriptive and dense, especially for the higher performing models FLUX-Time and SD3. \\n\\n2. Congruent with the tags experiment, we report **diversity as a function of caption density**, measured by DreamSim pairwise distance between generated images using 10 different random seeds for the same caption. However, we observe inconsistent trends across the text-to-image models (see updated Figure 19). This discrepancy likely stems from differences in the experiment design: previous Section 4 used hierarchically created captions with tags, often altering meaning by introducing new elements, while the caption density experiment used summarized versions of the same caption with varying levels of detail. By focusing on caption density rather than tags, we aim to better understand how caption descriptiveness influences cycle consistency.\\n\\n\\n### Q4. Unaddressed claims in the abstract.\", \"thank_you_for_pointing_these_claims_out_and_we_intend_to_clarify_them_both_here_and_in_the_updated_abstract\": \"1. **Analyze failure cases**: We provide examples of failure cases of cycle consistency in Figures 23 and 24. Failures include: synthetic images with artifacts or implausible generations but little effect on captions, descriptions of non-existent objects, endpoint model failures (i.e., the intermediate image or text representation is reasonable but the endpoint model creates inaccuracies which affect reconstruction). Many of these mistakes can be attributed to model error and usually affect text cycle consistency much more than image, mainly because images generated from incorrect captions often have lower cycle consistency, whereas image-to-text models do not always notice inaccuracies in synthetic images.\\nThere are also examples, typically for image cycle consistency, where information is not explicitly conveyed by the intermediate text, but the image reconstruction is nearly successful. To explain this, we can attribute cycle consistency as both a function of the intermediate representation **and the input**. For example, for image cycle consistency we find it is easy to reconstruct common or cliche images in very short captions. We show examples in Figure 9.\\n2. **How descriptiveness affects achieving cycle consistency**: We modify Section 4.4 to study caption density and cycle consistency, and find that more descriptive text positively influences cycle consistency as detailed in Q3.\\n3. **There are no explorations of training cycle-consistent models in the paper**: Our intent for Section 5 was to study variance of cycle consistency, as both image-to-text and text-to-image mappings can be stochastic due to random seed choice, temperature sampling, and prompt wording. 
We apologize for causing confusion and we have updated the abstract for clarity.\\n\\n### Q5. Is image-text cycle consistency a meaningful metric for model development?\\nYes, we believe that image-text cycle consistency is a meaningful metric for model development. In the updated paper, we show results indicating that more informative and descriptive captions correlate with cycle consistency, and similarly for generated images that are informative and faithful to their input text prompts. Qualitatively, we highlight examples in Figures 2, 3, and 8 where better cycle consistency aligns with greater preservation of information. We also provide evidence of existing models that incorporate cycle consistency in training [6-9] in **Q2**.\\n\\n[1] Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. PMLR 2023. \\n\\n[2] Visual instruction tuning. NeurIPS 2023.\\n\\n[3] Improved baselines with visual instruction tuning. 2023.\\n\\n[4] Internvl-2.0. OpenGVLab, 2024. https://internvl.github.io/blog/2024-07-02-InternVL-2.0/ \\n\\n[5] Llava-onevision: Easy visual task transfer. 2024.\\n\\n[6] Improving image generation with better captions. 2023.\\n\\n[7] Scaling rectified flow transformers for high-resolution image synthesis. ICML 2024.\\n\\n[8] Synth2: Boosting visual-language models with synthetic captions and image embeddings. Sharifzadeh et al., 2024.\\n\\n[9] Leveraging unpaired data for vision-language generative models via cycle consistency. Li et al., ICLR 2024.\\n\\n[10] A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, CVPR 2024.\"}",
"{\"summary\": \"The paper analyzes the cycle consistency of image-to-text and text-to-image models. The study shows that while current models exhibit a level of cycle consistency, there is room for improvement, especially T2I models are sensitive to slight changes in prompts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper focuses on interesting topics about the cycle consistency of by analyzing the cycle consistency of T2I and I2T models.\\n2. It provides a comprehensive analysis of cycle consistency in image-to-text and text-to-image mappings, highlighting the correlation between cycle consistency and downstream performance in tasks such as image captioning and text-to-image generation.\", \"weaknesses\": \"1. Although the paper presents that T2I models are more sensitive to small changes in input prompts, it lacks an in-depth analysis of why different combinations of T2I and I2T models yield varying performance. For example, how does the training dataset affect the cycle consistency? How does the pre-trained model in T2I or I2T affect the cycle consistency?\\n2. The paper does not sufficiently analyze why specific combinations of I2T and T2I models perform differently in terms of image and text cycle consistency. For example, BLIP2 underperforms compared to LLaVA1.6 in image cycle consistency while surpassing it in text cycle consistency.\\n3. The analysis in the paper highlights that text-to-image models are highly sensitive to slight changes in prompt structure (word choice, order, and length), which can lead to inconsistencies. However, the paper stops short of proposing concrete solutions or mitigation strategies for this issue.\\n4. The evaluation conducted solely on 1k MS COCO data is limited, especially since MS COCO captions often lack detailed descriptions of the images\", \"questions\": \"1. Recent research shows the hallucination problems in Multimodal LLM and compositional problems in T2I. How can the proposed method avoid this issue? For example, an input prompt could result in the generation of an incorrect image, which might then lead to an MLLM producing captions that are incorrect but resemble the original prompt. In this case, the cycle consistency might be high, but the actual performance should be low.\\n2. What is the cycle consistency on long captions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"In this work, the authors explored about the multi-modal cycle consistency of current text-to-image(T2I) and image-to-text(I2T) generation models.\\n\\nThey paired various off-the-shelf T2I and I2T models to build a cycle model and measured the input-output difference. They found that current state-of-the-art models possess a certain level of perceptual cycle consistency, even when they're not explicitly trained with cycle consistency objectives.\\nThen, they argued that as the performance of the individual T2I/I2T module increases, the cycle consistency improves.\\n\\nTo further analyze and find possible factors that can affect to achievement of cycle consistency, the authors suggested the concept of 'divergence' in T2I mappings. And they claimed that more detailed and informed text prompts showed more divergent output space, yet improved cycle consistency.\\nFinally, the authors demonstrated that a slight perturbation of text input sometimes results in higher variation in the T2I model output, which could be a challenge to achieve better cycle consistency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"-Considering the current generative models become more challenging to inject cycle consistency because of their iterative sampling process, their behavior on cycle consistency is an interesting question.\\n\\n-The script is well-written and clearly presents its claim.\", \"weaknesses\": \"-Some experiments are not well-designed, which makes the corresponding findings seem to lack contributions or doubtful.\\n\\n1. Section 3 stated to demonstrate \\\"more cycle-consistent models have better T2I/I2T downstream performance\\\", but its content only shows that \\\"better T2I/I2T models are more cycle-consistent\\\", which are not the same.\\nIt seems too natural that combining better T2I&I2T models improves cycle-consistency of the pair, since they provide high-quality data that contains major information of the input. On the other hand, it's still questionable that satisfying cycle consistency guarantees better T2I&I2T performance.\\n(e.g. A perfect Image->Text->Image reconstruction can be achieved if the I2T model writes down all pixel values in one long string. A perfect Text->Image->Text reconstruction can be achieved if the T2I model produces the image that contains the entire input text visually.)\\n\\n2. In Figure 6, synthesized input captions with fewer tags don't seem to actually contain less information. In the first row, the input caption for 1 Tag is very long and specific, more detailed than 2~5 Tag captions. In the second row, the 1 Tag caption already contains the info of the second tag \\\"reflects\\\". This could be the reason that the divergence decreased with fewer tags, since better cycle consistency (more tags) coming with more divergence seems counter-intuitive.\", \"questions\": \"-In Table 1\\\\~2, the presented values alone are not enough to tell if each I2T+T2I model pair has a good cycle consistency since there's no baseline performance or threshold was suggested. Although the authors showed several cases in Figure 2\\\\~3, could the authors provide any kind of baseline scenario for comparison?\\n\\n-Since the sampling process image-to-text models can be also stochastic, could you also provide the analysis on the divergence of I2T models?\\n\\n-What does the analysis of the divergence and sensitivity of I2T models suggest for creating more cycle-consistent models? 
It would be better if there were a clearer statement about what the results on divergence and sensitivity imply about cycle consistency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I'd like to thank the authors for their responses, and I appreciate that they resolved many of my questions.\\n\\nHowever, I still have several concerns:\\n\\n1. If the authors do not seek cycle consistency to accomplish better T2I/I2T mapping, then why should future researchers take cycle consistency into account? Most of the previous works that involved cycle consistency[1][2] used it as a means for better mapping (between text/image or two different image domains, etc.). If this is not what the authors want to show, what would be the benefit of good cycle consistency? Are there scenarios such that cycle consistency is beneficial as itself? \\n\\n2. I'm afraid that the manuscript has deviated too far from the first draft; almost every figures and table were re-drawn, experiments in Sections 4 and 5 were done with different control factors and different experiment designs, and some of them came up with different conclusions from the original manuscript. Although I agree that these new contents strengthen the author's idea, I'm concerned that this drastic revision might be against the purpose of the original submission deadline.\\n---\\n[1] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, ICCV 2017\\n\\n[2] Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency, ICLR 2024\"}",
"{\"summary\": \"This paper presents several intriguing phenomena regarding the cycle consistency of image-text mappings with text-to-image models and image-to-text ones. It demonstrates (1) that more advanced models achieve better cycle consistency; (2) a strong correlation between cycle consistency and tasks such as image captioning and text-to-image generation; (3) that the number of objects described by the text affects text-to-image mappings; and (4) that text-to-image models are sensitive to prompt variations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well presented and easy to read.\", \"The paper demonstrates extensive experiments incorporating multiple combinations of T2I and I2T models.\"], \"weaknesses\": [\"Major issues:\", \"Regarding Table 1 and 2, the analysis of why image-to-text models have greater impact than text-to-image models is hand-wavy. There should be more discussion on this.\", \"Causality Direction: While the paper demonstrates correlation between cycle consistency and model performance (captioning/generation), it fails to address the causality direction. The improved cycle consistency is likely a consequence of better model capabilities rather than a contributing factor, which diminishes the practical utility of cycle consistency as a metric.\", \"The claim that \\\"more descriptive text leads to a a broader distribution of generated images\\\" is not convincing. The experiments does not properly control the caption length. Figure 6 shows a case where the 1-tag caption exceeds the 5-tag caption in length.\", \"The abstract makes several claims that aren't supported by the paper's content:\", \"\\\"analyze in what ways they (cycle consistency) fail\\\": there are no such discussions in the paper.\", \"\\\"how it affects achieving cycle consistency\\\": there are no such discussions in the paper.\", \"\\\"we show possible challenges of training cycle consistent models due to the sensitivity of text-to-image models\\\": there are no explorations of training cycle-consistent models in the paper.\"], \"minor_issues\": [\"\\\"more descriptive text leads to a a broader distribution of generated images\\\" has double \\\"a\\\".\", \"On the 4th line in page 4, the sentence \\\"Therefore, examine how a text-to-image mapping can diverge from one fixed text prompt into many different images.\\\" is incomplete.\", \"At the end of page 8, \\\"Table 5\\\" should be \\\"Table 6\\\".\"], \"questions\": [\"Is image-text cycle consistency a meaningful metric for model development? Should improving cycle consistency be a priority for model designers? What are the concrete applications or benefits of enhanced cycle consistency?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Individual response to QcCd\", \"comment\": \"Thank you for the insightful feedback and the helpful suggestions.\\n\\n### Q1. What factors contribute to the observed differences in cycle consistency?\\nAs suggested by Reviewers go1x, QcCd, and 31TL, we have included an analysis of factors contributing to cycle consistency in the updated Section 3. The main findings are highlighted as follows:\\n1. **Cycle consistency improves with LLM scale**. An image-to-text model consists of a vision encoder, a projector, and a large language model (LLM). Scaling the vision transformer (ViT) for the vision encoder is reported to enhance performance [1], yet a simple MLP projection remains the dominant approach [2-5]. Since no model provides open-sourced weights with varying vision encoder scales, we focus our analysis on ablating the scale of the LLM. Figure 4 demonstrates that scaling the LLM enhances image and text cycle consistency across all image-to-text model families. Figure 5 visualizes this effect\\u2014despite being trained on the same dataset and architecture, only InternVL2-40B successfully captures both the color and the presence of a corner turret.\\n\\n2. **Cycle consistency improves with re-captioned dataset quality**. Table 1 demonstrates that the quality of the re-captioned dataset (e.g., dataset re-captioned by GPT-4V, LLaVA1.6-34B) plays an important role in improving image cycle consistency, often outperforming models trained on larger datasets annotated by less-performant models (e.g., BLIP). On the other hand, text cycle consistency shows little difference between the LLaVA models, as the input text from sDCI often lacks fine-grained detail (evidenced in Figure 16) compared to longer and more descriptive synthetic captions, such as those produced by LLaVA1.6 and LLaVA-OV. We believe higher-quality human annotations and text-to-image models with longer context would enhance the analysis of text cycle consistency. We exclude InternVL2 from this analysis as its pre-training dataset details are not disclosed.\", \"we_find_minimal_differences_in_objective_functions_across_models\": \"most image-to-text models use visual instruction tuning with auto-regressive objectives except BLIP2, and most text-to-image models are LDMs, except Stable Diffusion 3 with rectified flow. Therefore, we exclude analysis of the objective function. As suggested by the reviewer, we detail differences in architecture, scale, and dataset in Table 1, 5, and 6.\\n\\n### Q2. It is unclear how Section 4 and 5 relate to cycle consistency. \\nPreviously Section 4 lacked proper control of caption length (mentioned by 31TL, op89) and Section 5 combined changes in length and style during caption rewriting, making it difficult to isolate their respective impacts on cycle consistency. To address these shortcomings and better relate Sections 4 and 5 to cycle consistency, we make the following changes:\\n1. Updated Section 4.4 studies **cycle consistency as a function of caption length**. Captions from the Densely Captioned Images dataset [6] are summarized into varying lengths (5, 10, 20, 30, and 50 words) using LLaMA3-8B-Instruct. Figure 12 shows that cycle consistency improves as captions become more descriptive, especially for the higher performing models FLUX-Time and SD3.\\n2. Updated Section 5 studies the **variance in cycle consistency** (formerly called sensitivity). 
Unlike the previous experiment, we address the effect of caption length separately in Section 4.4, and focus on sources of variance in this section. Specifically, we analyze how random seed selection, prompt style, and temperature sampling contribute to this variance. Table 2 shows that image-to-text models exhibit higher variance due to temperature sampling but remain relatively robust to changes in prompt style. In contrast, text-to-image models are significantly more sensitive to prompt style than to random seed sampling. Note that we excluded InternVL2-40B from measuring cycle consistency due to lack of compute, and we will add it to the final manuscript.\\n3. Congruent with the tags experiment, updated Figure 19 shows **diversity as a function of caption density**, measured by DreamSim pairwise distance between generated images using 10 different random seeds for the same caption. However, we observe inconsistent trends across the text-to-image models. This discrepancy likely stems from differences in the experiment design: previously Section 4 used hierarchically created captions with tags, which altered meaning by introducing new elements. Instead, the caption density experiment uses summarized versions of the same caption with varying levels of detail. By focusing on caption density rather than tags, we aim to better understand how caption descriptiveness influences cycle consistency.\"}",
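The caption-density diversity measurement described in point 3 above reduces to an average pairwise perceptual distance over images generated from one caption with different random seeds. Below is a minimal illustrative sketch of that computation; `generate_image` and `dreamsim_distance` are assumed wrappers around a text-to-image model and the DreamSim metric, not APIs defined in this discussion.

```python
from itertools import combinations

def caption_diversity(caption, generate_image, dreamsim_distance, num_seeds=10):
    """Mean pairwise DreamSim distance between images sampled for one caption.

    generate_image(caption, seed) -> image and dreamsim_distance(a, b) -> float
    are assumed wrappers around a T2I model and the DreamSim metric.
    """
    images = [generate_image(caption, seed=s) for s in range(num_seeds)]
    pairs = list(combinations(images, 2))
    return sum(dreamsim_distance(a, b) for a, b in pairs) / len(pairs)
```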
"{\"summary\": \"The paper (Cycle Consistency of Image-Text Mappings) investigates the degree to which the image-text mappings have a cyclical consistency. Although existing models do not train for this consistency explicitly, for a subset of models this cyclic consistency is enforced. In terms of application, the authors find that the measure of cycle-consistency correlates relatively well with downstream accuracy \\u2014 which can help perform quick tests on the capabilities of the model without requiring a curated benchmark. Overall, I believe that the paper is insightful, but lacks a strong application using those insights except for an approximate performance check for downstream tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is a thorough empirical study on the cycle-consistency of image-text representations across most of the popular T2I and I2I models. The results though not super surprising (as they are often trained / fine-tuned on the similar training sets) are well documented and can be crucial for the community.\", \"The observation regarding the correlation between image-text cyclical consistency and downstream task accuracy can be useful to quickly check the effectiveness of the model. One question regarding this: In Fig. 4, SDXL-Turbo is used as a decoder for the image reconstruction case and LLaVa-1.5-13B for text generation. How does this design choice affect the correlation between cycle consistency and downstream performance? The authors should ideally provide some ablation on this design choice.\"], \"weaknesses\": [\"Weaknesses:\", \"The application of using cycle-consistency as an approximate measure for downstream task accuracy is an interesting use-case; However, I believe they are proxy for only two tasks (Image captioning and T2I generation performance). To be useful in practice, I suggest the authors to add in more tasks concerning these models (e.g., VQA amongst others) and check if cycle-consistency can still be an approximate measure of task accuracy.\", \"I find the Sec.(5) to be intriguing, but the authors should highlight how some of the takeaways be used towards training / fine-tuning models with better downstream capabilities.\", \"The authors select CIDEr score for captioning performance measurement; Have the authors considered using a strong MLLM for measuring captioning performance and using it to measure the correlation with?\", \"(This is not a weakness - but a discussion point) \\u2014 Based on the insights, what do the authors think about building unified image-to-text and text-to-image models while enforcing cyclical consistency? Will it lead to better downstream performance than training these models independently.\"], \"questions\": \"Overall, this paper is a nice empirical study on cyclical consistency of image-text mappings, but I would urge the authors to respond to the Weaknesses during the rebuttal. I am open to improving the score based on the rebuttal discussion. Looking forward to the discussion.\\n\\nSee Weaknesses for additional questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper received ratings of 5, 5, 5, 3, 3, and was unanimously recommended for rejection by all reviewers.\\n\\nThe paper study the cycle consistency of image-text mappings using combinations of text-to-image (T2I) and image-to-text (I2T) models. The authors observe a positive correlation between cycle consistency and certain model attributes such as descriptiveness, prompt-following ability, and reduced hallucination. They further evaluate factors affecting cycle consistency, including language model scale, dataset quality, and prompt style.\\n\\nWhile this work provides an empirical exploration of cycle consistency in multimodal mappings, the primary contribution is based on observations rather than novel methods. It analyzes the correlation between cycle consistency and downstream task performance but does not focus much on methodologies or conclusive insights for improving model performance beyond existing literature.\", \"strengths\": [\"This study tackles an less-explored area, offering a detailed empirical analysis of cycle consistency in multimodal systems.\", \"The analysis spans multiple off-the-shelf models and considers factors like language model scale, dataset quality, and variance in prompts.\", \"The experiments are clearly presented with adequate visualizations and metrics.\"], \"area_for_improvements\": [\"Limited novelty - the paper primarily presents observations without introducing a novel methodology or theoretical contribution. Many conclusions reiterate well-understood dependencies (e.g., high-quality datasets improve model performance).\", \"Although cycle consistency is analyzed, the paper lacks clear evidence or arguments for its practical significance in real-world applications.\", \"Major revisions during the rebuttal phase introduced significant changes to the experiments, conclusions, and structure of the paper. This makes it challenging to assess the original contribution versus the revised content.\", \"Insufficient depth in causal analysis. Thsi paper does not convincingly establish whether cycle consistency is a cause of improved performance or merely a byproduct of better models.\", \"Despite its merits, the submission falls short in terms of methodological novelty, practical utility, and conclusive contributions to the field. It does not meet the bar for acceptance at ICLR. We encourage the authors to address the feedback provided in the reviews and consider submitting a revised version in the future.\"], \"additional_comments_on_reviewer_discussion\": \"The authors provided extensive revisions, including new experiments analyzing caption density, variance sources in cycle consistency, and better aligning the findings with cycle consistency metrics. The author further clarified that the paper is an empirical study, not proposing new methods, and positioned cycle consistency as a self-supervised heuristic for understanding model behavior and dataset quality. Reviewers appreciated the improved clarity and responsiveness but remained concerned about the substantial post-submission changes, which significantly altered the original paper.\\n\\nWhile the revisions addressed many technical concerns, reviewers found the contributions incremental and largely observational, with limited practical utility or methodological novelty. The lack of causal evidence linking cycle consistency to downstream performance, coupled with the extensive restructuring during rebuttal. 
Therefore, a consensus has been reached that the submission does not meet the ICLR bar for acceptance.\"}",
"{\"title\": \"Common Response to Reviewers\", \"comment\": \"We thank the reviewers for their helpful feedback and thoughtful insights. We are pleased that the reviewers found our topic **interesting** (go1x, oP89, QcCd), our analysis **comprehensive and thorough** (go1x, 31TL, BHgf), and **well presented** (31TL, oP89).\\n\\nFirstly, we would like to clarify the goal of our paper is to present an **empirical study of cycle consistency in image-text mappings**. We observe growing cycle consistency across a wide range of image-to-text and text-to-image models, i.e., images and text are becoming increasingly interchangeable in their representations. We analyze 1) what factors are driving this trend, 2) what kinds of images and texts are exchangeable (i.e., cycle-consistent), 3) and sources of variance in cycle consistency.\", \"we_summarize_our_main_updates_to_the_paper_as_follows\": [\"### Paper Reorganization:\", \"**Section 3: Analysis on factors contributing to cycle consistency**.\", \"As suggested by go1x, QcCd, 31TL, we have included an analysis of factors contributing to cycle consistency.\", \"We find increasing cycle consistency with LLM scale and with high-quality dataset re-captioning.\", \"**Section 4: Properties of cycle-consistent texts and images.**\", \"As mentioned by go1x, we include analysis on object hallucination in text (Section 4.3).\", \"As suggested by oP89, 31TL, QcCd, we study cycle consistency as a function of caption length (Section 4.4).\", \"**Section 5: Variance in cycle consistency** (formerly called divergence/sensitivity).\", \"As suggested by QcCD, oP89, we relate Sections 4 and 5 to cycle consistency by analyzing how random seed selection, temperature sampling, and prompt and caption style cause variance in cycle consistency.\", \"Unlike the previous experiment, we address the effect of caption length separately in Section 4.4, and focus on sources of variance in this section.\", \"### Dataset Updates:\", \"As suggested by go1x, we substitute COCO with the **Densely Captioned Images** (DCI) [1] dataset and update cycle consistency results accordingly. The DCI dataset features **higher resolution images** annotated with more **detailed captions** compared to COCO, improving the quality of our analysis.\", \"### Model Updates:\", \"To analyze factors driving cycle consistency, we select image-to-text models with **disclosed** architecture, scale, and dataset details and weights.\", \"[1] A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, Urbanek et al., CVPR 2024.\"]}",
"{\"comment\": \"I appreciate the authors for their efforts in addressing the reviewers' feedback and making substantial changes to the manuscript.\\n\\n1. However, I am concerned the extent of these modifications would be unfair as most experiments, analyses, figures, and tables have been replaced, making it a substantially different paper. \\n\\n2. Despite these changes, I still believe the contribution of this paper is limited. The authors argue that ITIT explicitly trains cycle-consistent models while a few others implicitly encourage cycle consistency by training on synthetic data. While the paper highlights factors influencing cycle consistency between the trained T2I and I2T models, it does not provide evidence that improved cycle consistency translates to better model performance. This gap reduces the practical value of the findings for model development.\"}",
"{\"title\": \"Individual response to BHgf (2)\", \"comment\": \"### Q3. How Section 5 takeaways can be used for model training.\\nIt is an interesting question how to incorporate the results of Section 5 (sensitivity to prompt style) for better model training. Because image-text mappings are one-to-many (or even many-to-many), some variance is expected and even desirable for sampling over different outputs. The degree to which two text prompts are equivalent could be even a model design choice if introduced as a type of augmentation during training, so that different sentences with the same sentiment map to the same images (this is similar to image augmentation for vision models).\\n\\nNote that we have extended our analysis to study how **random seed selection, prompt and caption style, and temperature sampling** contribute to variance in cycle consistency (updated Section 5). Table 2 shows that image-to-text models exhibit higher variance due to temperature sampling but remain relatively robust to changes in prompt style. In contrast, text-to-image models are significantly more sensitive to caption style than to random seed sampling. \\n\\n\\n### Q4. Using a strong MLLM for measuring captioning performance.\\nWe clarify that we use a strong LLM for measuring captioning performance instead of CIDEr score. We use a VQA Benchmark dataset (such as MME, MMStar) and ask each image-to-text model to produce descriptions for the images in the benchmark dataset. Then, we ask an LLM to answer the VQA question based on the output text descriptions (instead of the images, which would be traditional VQA). This measure for captioning performance has several advantages over CIDEr scores. Firstly, CIDEr scores compare caption outputs against human annotated captions, which are often short and lack detail. Secondly, we are able to determine what kind of information are included in different captions, and what aspects image-to-text models are better at describing than others (based on the VQA sub-categories). Figure 7 demonstrates that cycle consistency strongly correlates with descriptiveness in the generated captions. \\n\\n### Q5. What do the authors think about building unified image-to-text and text-to-image models while enforcing cycle consistency? \\nBased on our findings, along with evidence of existing high-performing models which **already** incorporate cycle consistency in training [1-4], we believe that enforcing cycle consistency can be a helpful training objective. For text-to-image models, DALLE-3 [1] and SD3 [2] are trained on descriptive captions generated by an image captioning model. This process can be formalized as $\\\\text{argmin}_G \\\\ L(I, G(F(I)))$ where $I$ is the input image, $F$ is the image-to-text model, and $G$ is the text-to-image model. Training a text-to-image model on synthetic captions is equivalent to enforcing image cycle consistency relative to the fixed image captioner. For vision-language models (VLMs), Synth2 [3] trains a VLM using data from a pre-trained text-to-image model, while ITIT [4] jointly trains image-to-text and text-to-image models to be cycle-consistent. By injecting cycle consistency during training, both Synth2 and ITIT achieve high performance with significantly fewer data examples than state-of-the-art models. We have included this discussion in the introduction (Section 1) to further motivate our analysis of cycle consistency in current models.\\n\\n[1] Improving image generation with better captions. 
Betker et al., 2023.\\n\\n[2] Scaling rectified flow transformers for high-resolution image synthesis. Esser et al., ICML 2024.\\n\\n[3] Synth2: Boosting visual-language models with synthetic captions and image embeddings. Sharifzadeh et al., 2024.\\n\\n[4] Leveraging unpaired data for vision-language generative models via cycle consistency. Li et al., ICLR 2024.\"}",
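As a rough illustration of the formalization argmin_G L(I, G(F(I))) discussed above, the sketch below computes the image cycle-reconstruction loss with the captioner held fixed; `captioner`, `generator`, and `recon_loss` are assumed callables standing in for F, G, and the loss L, not components specified in this thread.

```python
def image_cycle_loss(images, captioner, generator, recon_loss):
    """Reconstruction objective L(I, G(F(I))) with the captioner F held fixed."""
    total = 0.0
    for image in images:
        caption = captioner(image)           # F(I), frozen image-to-text model
        reconstruction = generator(caption)  # G(F(I)), text-to-image model being trained
        total += recon_loss(image, reconstruction)
    return total / len(images)
```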
"{\"title\": \"Individual response to oP89\", \"comment\": \"Thank you for the insightful feedback and the helpful suggestions.\\n\\n### Q1. It is questionable that satisfying cycle consistency guarantees better performance.\\nThank you for pointing this out and we will clarify this in the paper. We agree that cycle consistency is likely a consequence of better model capabilities \\u2013 updated Section 3 analyzes contributing factors for enhanced cycle consistency. We are **not** suggesting that cycle consistency is necessarily a contributing factor to performance, unless models are known to incorporate cycle consistency during their training (e.g., SD3 and DALLE-3). \\n> On the other hand, it's still questionable that satisfying cycle consistency guarantees better T2I&I2T performance. (e.g. A perfect Image->Text->Image reconstruction can be achieved if the I2T model writes down all pixel values in one long string. A perfect Text->Image->Text reconstruction can be achieved if the T2I model produces the image that contains the entire input text visually.)\\n\\nWhile we agree that there are trivial solutions to the I2T2I and T2I2T cycles, we do not encounter a failure mode to this degree, since it is unlikely that models are trained with these kind of examples (i.e., outputting pixel values in text or generating text descriptions in pixels). Instead, we demonstrate that higher cycle consistency is generally achieved by higher-quality generated text and images, i.e., more descriptive text with less hallucinations and images generated faithfully to the text and containing fine-grained details (see updated Section 4). \\nApart from the rare scenario suggested by the reviewer, we demonstrate in Section 5 that cycle consistency is sensitive to subtle perturbations in language (i.e., prompt style). Additionally, Section 4.2 (Figure 9) shows instances where high image cycle consistency is obtained using short captions lacking details. While such cases are not uncommon for image cycle consistency, we do not observe similar cases in text cycle consistency.\\n\\n### Q2. How divergence and sensitivity analysis relate to cycle consistency. Fewer tags don\\u2019t necessarily contain less information.\\nWe agree that previous Section 4 (divergence) lacks proper control of caption length and information - instead it varies the number of elements described. Furthermore, Section 5 (sensitivity) combined changes in length and style during prompt rewriting, making it difficult to isolate their respective impacts on cycle consistency. To address these shortcomings and better relate Sections 4 and 5 to cycle consistency, we make the following changes:\\n1. Updated Section 4.4 studies **cycle consistency as a function of caption length**. Captions from the Densely Captioned Images dataset [2] are summarized into varying lengths (5, 10, 20, 30, and 50 words) using LLaMA3-8B-Instruct. Figure 12 shows that cycle consistency improves as captions become more descriptive, especially for the higher performing models FLUX-Time and SD3.\\n2. Updated Section 5 studies the **variance in cycle consistency** (formerly called divergence). Unlike the previous experiment, we address the effect of caption length separately in Section 4.4, and focus on sources of variance in this section. Specifically, we analyze how random seed selection, prompt style, and temperature sampling contribute to this variance. 
Table 2 shows that image-to-text models exhibit higher variance due to temperature sampling but remain relatively robust to changes in prompt style. In contrast, text-to-image models are significantly more sensitive to prompt style than to random seed sampling. \\n3. Congruent with the tags experiment, we report **diversity as a function of caption density**, measured by DreamSim pairwise distance between generated images using 10 different random seeds for the same caption. However, we observe inconsistent trends across the text-to-image models (see updated Figure 19). This discrepancy likely stems from differences in the experiment design: previously Section 4 used hierarchically created captions with tags, often altering meaning by introducing new elements, while the caption density experiment used summarized versions of the same caption with varying levels of detail. By focusing on caption density rather than tags, we aim to better understand how caption descriptiveness influences cycle consistency.\"}",
"{\"title\": \"Individual response to oP89 (2)\", \"comment\": \"### Q3. Baselines for cycle consistency.\", \"we_add_the_following_baselines_and_comparisons_in_figure_16\": \"1. For **image cycle consistency**, we compare against reconstructing an image given **human annotated text**. For this baseline, each image in the DCI dataset is paired with a \\u201cshort caption\\u201d provided by a human annotator. We compare $\\\\text{DreamSim}(I, G(T_\\\\text{human}))$ with $\\\\text{DreamSim}(I, G(F(I))$, comparing human text $T_\\\\text{human}$ against generated text F(I), where $G$ is the image-to-text model and $G$ is the text-to-image model. Figure 16 shows that synthetic text surpasses human text beyond a certain point due to superior descriptiveness, highlighting its effectiveness as a substitute for human text in training large models.\\n2. For **text cycle consistency**, we compare against reconstructing text given a **natural image**. Specifically, we compare $\\\\text{SBERT}(T, F(I))$ with $\\\\text{SBERT}(T, F(G(T))$, where $I$ is the real image paired with the input text $T$. Figure 16 shows that synthetic images achieve better text cycle consistency compared to real images. Figure 17 visualizes text cycle consistency from a real image vs. synthetic image. Compared to real images containing more complex details, synthetic images only generate details described in the input text which occupy larger areas of the generated image. Therefore, such details are easier to reconstruct for the image-to-text model, resulting in better text reconstruction.\\n\\n\\n### Q4. Analysis of the divergence of image-to-text models. \\nAs mentioned in **Q2**, the updated Section 5 studies the variance in cycle consistency (formerly called divergence). Specifically, analysis on how 1) **prompt style** and 2) **temperature sampling** affects variance in cycle consistency relates to variance from **image-to-text models**.\\n\\n[1] Cyclegan, a master of steganography, Chu et al., NIPS \\u201cMachine Deception\\u201d Workshop, 2017.\\n\\n[2] A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, Urbanek et al., CVPR 2024.\"}",
"{\"title\": \"Individual response to 31TL\", \"comment\": \"Thank you for the insightful feedback and the helpful suggestions.\\n\\n### Q1. Analysis is hand-wavy for Table 1 and 2. \\nAs suggested by Reviewers go1x, QcCd, and 31TL, we have included an analysis of factors contributing to cycle consistency in the updated Section 3. The main findings are highlighted as follows:\\n1. **Cycle consistency improves with LLM scale**. An image-to-text model consists of a vision encoder, a projector, and a large language model (LLM). Scaling the vision transformer (ViT) for the vision encoder is reported to enhance performance [1], yet a simple MLP projection remains the dominant approach [2-5]. Since no model provides open-sourced weights with varying vision encoder scales, we focus our analysis on ablating the scale of the LLM. Figure 4 demonstrates that scaling the LLM enhances image and text cycle consistency across all image-to-text model families. Figure 5 visualizes this effect\\u2014despite being trained on the same dataset and architecture, only InternVL2-40B successfully captures both the color and the presence of a corner turret.\\n2. **Cycle consistency improves with re-captioned dataset quality**. Table 1 demonstrates that the quality of the re-captioned dataset (e.g., dataset re-captioned by GPT-4V, LLaVA1.6-34B) plays an important role in improving image cycle consistency, often outperforming models trained on larger datasets annotated by less-performant models (e.g., BLIP). On the other hand, text cycle consistency shows little difference between the LLaVA models, as the input text from sDCI often lacks fine-grained detail (evidenced in Figure 16) compared to longer and more descriptive synthetic captions, such as those produced by LLaVA1.6 and LLaVA-OV. We believe higher-quality human annotations and text-to-image models with longer context would enhance the analysis of text cycle consistency. We exclude InternVL2 from this analysis as its pre-training dataset details are not disclosed.\\nWe detail differences in architecture, scale, and dataset in Table 1, 5, and 6.\\n\\n### Q2. Failure to address causality. \\nWe agree that cycle consistency is likely a consequence of better model capabilities \\u2013 updated Section 3 analyzes contributing factors for enhanced cycle consistency. We are **not suggesting that cycle consistency is a contributing factor to performance**. \\n\\nHowever, this does not diminish the practical utility of cycle consistency as a tool to study model capabilities. Our goal is to show that cycle consistency is an emergent property in image-to-text and text-to-image models, and strongly correlates with various desired properties such as descriptiveness and reduced hallucination in text, and enhanced prompt-following in images (Section 4). Based on our findings, along with evidence of existing high-performing models which **already** incorporate cycle consistency in training [6-9], we believe that enforcing cycle consistency can be a helpful training objective for future research. For instance, text-to-image models such as DALLE-3 [6] and SD3 [7] report training on descriptive captions generated by an image captioning model. This process can be formalized as $\\\\text{argmin}_G \\\\ L(I, G(F(I)))$ where $I$ is the input image, $F$ is the image-to-text model and $G$ is the text-to-image model. Training a text-to-image model on synthetic captions is equivalent to enforcing image cycle consistency relative to the fixed image captioner. 
For vision-language models (VLMs), Synth2 [8] trains a VLM using data from a pretrained text-to-image model, while ITIT [9] jointly trains image-to-text and text-to-image models to be cycle-consistent. By injecting cycle consistency during training, both Synth2 and ITIT achieve high performance with significantly fewer data examples than state-of-the-art models.\"}",
"{\"title\": \"Individual response to BHgf\", \"comment\": \"Thank you for the insightful feedback and the helpful suggestions.\\n\\n### Q1. How does model choice affect the correlation between cycle consistency and downstream performance? \\nAs requested, we report the Pearson correlation coefficient **per model**. The **correlation is consistently strong** for most models ($R^2 > 0.65$), except for BLIP2-2.7B and LLaVA-OV-0.5B with lower coefficients of 0.349 and 0.241, respectively. We attribute the low correlation to their use of small-scale, less-performant language models (OPT-2.7B, Qwen2-0.5B) as pre-trained backbones, which may cause poorer text reconstruction.\\n\\n| Fixed I2T Model | Text Cycle Consistency vs T2I Model Performance ($R^2$) | \\n|-----------|-----------|\\n| BLIP2-2.7B | 0.349 |\\n| BLIP2-6.7B | 0.657 |\\n| BLIP2-T5-xxl | 0.871 |\\n| LLaVA1.5-7B | 0.966 |\\n| LLaVA1.5-13B | 0.964 |\\n| LLaVA-OV-0.5B | 0.201 |\\n| LLaVA-OV-7B | 0.910 |\\n| LLaVA1.6-7B | 0.963 |\\n| LLaVA1.6-34B | 0.952 |\\n| InternVL2-2B | 0.935 |\\n| InternVL2-8B | 0.904 |\\n| InternVL2-26B | 0.879 |\\n| InternVL2-40B | 0.913 |\\n| Average | 0.950 |\\n\\n|Fixed I2T Model| Image Cycle Consistency vs T2I Model Performance ($R^2$) | \\n|-----------|-----------|\\n| BLIP2-2.7B | 0.836 |\\n| BLIP2-6.7B | 0.834 |\\n| BLIP2-T5-xxl | 0.879 |\\n| LLaVA1.5-7B | 0.915 |\\n| LLaVA1.5-13B | 0.916 |\\n| LLaVA-OV-0.5B | 0.932 |\\n| LLaVA-OV-7B | 0.954 |\\n| LLaVA1.6-7B | 0.942 |\\n| LLaVA1.6-34B | 0.953 |\\n| InternVL2-2B | 0.904 |\\n| InternVL2-8B | 0.903 |\\n| InternVL2-26B | 0.880 |\\n| InternVL2-40B | 0.902 |\\n| All Models | 0.924 |\\n\\n|Fixed T2I Model| Text Cycle Consistency vs I2T Model Performance ($R^2$) | \\n|-----------|-----------|\\n| SD1.5 | 0.875 |\\n| SDXL-Turbo | 0.845 |\\n| SDXL | 0.879 |\\n| SD3 | 0.870 |\\n| FLUX Time | 0.861 |\\n| All Models | 0.864 |\\n\\n|Fixed T2I Model| Image Cycle Consistency vs I2T Model Performance ($R^2$) | \\n|-----------|-----------|\\n| SD1.5 | 0.741 |\\n| SDXL-Turbo | 0.759 |\\n| SDXL | 0.731 |\\n| SD3 | 0.790 |\\n| FLUX Time | 0.794 |\\n| All Models | 0.766 |\\n\\nFurthermore, we have updated Figures 6, 7, 10 results to report cycle consistency **averaged across all models**, instead of just fixing one model in the pipeline. We also extend the analysis to **include all four combinations**, additionally comparing text quality (descriptiveness, hallucination) and image quality (prompt-following) with both image and text cycle consistency. We observe that both cycles exhibit a **strong correlation** across modalities, with text cycle consistency being more prominent.\\n\\n\\n### Q2. Add more tasks (e.g., VQA) and check if cycle consistency can be a proxy of task accuracy.\\nAs requested, we report correlation between cycle consistency and VQA performance. Note that this is an evaluation of **model** performance, whereas VQA without V measures the descriptiveness of **text descriptions**. We evaluate VQA performance on MMBench and MME, with MME divided into perception and cognition categories. Cycle consistency is computed on the sDCI dataset and averaged across five different text-to-image models with 3 different random seeds. The table below reports the Pearson correlation coefficient ($R^2$) between VQA scores and cycle consistency. Image cycle consistency strongly correlates with MMBench and MME (cognition), but weaker with other benchmarks. 
This is somewhat surprising as MME (perception) shows a strong correlation with cycle consistency in our VQA without V experiment. This discrepancy likely stems from the differences in these tasks: VQA evaluates the **model**\\u2019s ability to answer diverse questions about an image, while VQA without V evaluates whether the **text descriptions** include detailed answers to those questions. We have also added the correlation plots to the Appendix as Figure 18.\\n| Dataset | Image Cycle Consistency | Text Cycle Consistency |\\n|-----------|-----------|-----------|\\n| MMBench | 0.735 | 0.486 |\\n| MME (average) | 0.363 | 0.553 |\\n| MME (perception) | 0.429 | 0.532 |\\n| MME (cognition) | 0.837 | 0.417 |\"}"
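The coefficients reported in the tables above are Pearson correlations between per-model cycle-consistency scores and benchmark scores. A minimal sketch of that computation is shown below; the numbers are placeholder values for illustration, not the paper's data.

```python
from scipy.stats import pearsonr

# Placeholder per-model values, aligned by model; not the actual results above.
cycle_consistency = [0.61, 0.58, 0.70, 0.74, 0.79]
benchmark_score = [61.2, 58.9, 67.4, 71.0, 74.3]

r, p_value = pearsonr(cycle_consistency, benchmark_score)
print(f"R^2 = {r ** 2:.3f} (p = {p_value:.3g})")
```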
]
} |
1Q2t6D4dK6 | Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image-Quality Metrics | [
"Aleksandr Gushchin",
"Khaled Abud",
"Georgii Bychkov",
"Ekaterina Shumitskaya",
"Anna Chistyakova",
"Sergey Lavrushkin",
"Bader Rasheed",
"Kirill Malyshev",
"Dmitriy S. Vatolin",
"Anastasia Antsiferova"
] | Most modern image-quality-assessment (IQA) metrics are based on neural networks, which makes the adversarial robustness of these metrics a critical concern. This paper presents the first comprehensive study of IQA defense mechanisms in response to adversarial attacks on these metrics. We systematically evaluated 29 defense strategies - including adversarial purification, adversarial training, and certified robustness - and applied 14 adversarial attack algorithms in both adaptive and nonadaptive settings to compare these defenses on nine no-reference IQA metrics. Our analysis of the differences between defenses and their applicability to IQA metrics recognizes that a defense technique should preserve IQA scores and image quality. Our proposed benchmark aims to guide the development of IQA defense methods and can evaluate new methods; the latest results are at link hidden for blind review. | [
"adversarial defenses",
"image quality assessment",
"adversarial attacks",
"image quality metrics",
"benchmark"
] | Reject | https://openreview.net/pdf?id=1Q2t6D4dK6 | https://openreview.net/forum?id=1Q2t6D4dK6 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wTu33KEEAM",
"uwyzxmyo7C",
"tIZV1GeGJE",
"t8ybe0dTQb",
"sblMoKfWKw",
"r2MTBkKprL",
"qfSyYNHyTb",
"l6c6nL0wTj",
"iPBGaMqZWX",
"fiDPswmMDx",
"fdVcaKwcZA",
"fb6I9bCrwb",
"ca2fRvm4GH",
"NsiGTxWheI",
"CQsyY4MlE1",
"8bjOSjf9Dm",
"6opiMg7pOL",
"6eBBuZTO6n",
"2GAc8bXj72",
"1xH0j4ZKCr"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review"
],
"note_created": [
1733046015308,
1731939618342,
1730428521326,
1732606197692,
1731939704841,
1730349356118,
1732510515781,
1731939534821,
1731939846238,
1730620326522,
1732484838300,
1732732974623,
1730614756103,
1733047836451,
1737523541639,
1732573733081,
1732870452723,
1733109066898,
1732733089216,
1734345583869
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_sRtC"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_RxLB"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_vZ76"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_sRtC"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_RxLB"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_RZ8e"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_RZ8e"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_vZ76"
],
[
"ICLR.cc/2025/Conference/Submission2926/Reviewer_sRtC"
],
[
"ICLR.cc/2025/Conference/Submission2926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2926/Area_Chair_yTzj"
]
],
"structured_content_str": [
"{\"title\": \"Additional clarification\", \"comment\": \"__Regarding point 1__: We respectfully disagree with the reviewer\\u2019s interpretation. In our formulation, the attack is designed to meet two key criteria: (1) maximize the predicted IQA score, and (2) ensure that perturbations remain imperceptible to human observers. Trivial unbounded attacks fail to meet the second criterion, as they generate visible distortions, making them impractical for real-world applications where imperceptibility is essential. To ensure imperceptibility, we employ a parameter tuning methodology, detailed in Figure 2. To ensure that attacks are aiming to increase IQA scores, we made some minor modifications to the objective function. For example:\\n* Original objective function for Zhang et al. attack: $L = - D(x_{clean}, x_{adv}) + \\\\lambda * (f(x_{adv}) - f(x_{clean}))^2 $\\n* Our slightly modified objective function: $L = - D(x_{clean}, x_{adv}) + \\\\lambda * f(x_{adv}) $\\n\\nwhere $D$ \\u2014 is a proxy full-reference IQA metric (e.g., LPIPS, DISTS, or SSIM in our experiments), and $f(\\\\cdot)$ is the NR-IQA model being attacked. The original function maximized the absolute difference between the IQA scores of clean and adversarial images, while our modification focuses directly on increasing $f(x_{adv})$, balancing this increase with maintaining imperceptibility.\\nOur approach addresses the reviewer\\u2019s concern about trivial unbounded attacks, ensuring their practical relevance while highlighting the vulnerability of NR-IQA models to adversarial attacks.\\n\\n__Regarding point 2__: Yes, this is true; adding noise to an image can impact its quality. However, we can view the process of adding noise followed by a denoising step as a quality-preserving transformation in relation to image quality metrics. Importantly, all theoretical guarantees that apply to classification also hold for quality metrics since these guarantees are based solely on the noise model and the score aggregation process. We do not alter these components of the algorithm(for the RS, DRS, DDRS, and DP methods, we discretize the regression metric values into N classes before applying the methods; for the MS and DMS methods, we apply them directly without any modifications). The main issue is that the performance of these models, in terms of accuracy of quality scores or correlation with subjective scores, may degrade more than in classification or detection tasks. Nevertheless, our experimental results demonstrate that the decline in accuracy is relatively minor and comparable to that observed with other defense mechanisms. Therefore, we believe that certified defenses based on smoothing represent a promising direction for image quality assessment.\\n\\nWe hope this addresses your concerns, and we remain open to additional feedback\"}",
"{\"comment\": \"Thank you for your valuable suggestions and thoughtful feedback. We address your concerns below:\\n### Weaknesses\\n1. Our primary goal was to create a benchmark that highlights the need for IQA-specific adversarial methods. We agree that methods tailored specifically for IQA would be valuable. While our current focus is on establishing this benchmark, we plan to pursue developing IQA-motivated attacks and defenses in future work. We also want to highlight that we did evaluate 4 IQA-specific attacks, 1 defense (FCN filter) and 1 defense in progress (Gradient Norm Regularization)\\n2. We refer to the figures and tables located in the Appendix from the main text. for example: (lines 140, 205, 275, etc.). We agree that the Appendix needs to be easier to follow, so we added a brief summary of the appendix structure in section A0 (lines 760-780 in the revised paper), highlighting how each section connects to specific parts of the main text. We hope this makes it easier to navigate the results and align them with our main findings.\\n3. Thank you for your suggestion. Our motivation for using PSNR and SSIM was based on the simple structure of these metrics. LPIPS itself is a NN-based metric, and is known to be vulnerable to adversarial attacks ([1],[2]). We assume that LPIPS can be used as a perceptual metric in our study, however, it requires a robustness analysis against transferable attacks on NR IQA models. This step is essential to ensure its reliability and consistency in the context of adversarial evaluation. We also have included additional metrics such as L_inf and L_2 alongside PSNR and SSIM to better capture image quality, particularly in terms of structural and perceptual features.\\n4. Thank you for the suggestion. In the revised version, we have reorganized Section 3.1 by dividing the paragraphs according to defense types to enhance readability.\\n### Questions\\n1. We apologize for typo. The correct term is \\\"spatial complexity,\\\" and we have made the necessary corrections in the updated version of the paper (line 204). Section A3 contains a correct description.\\n2. Thank you for raising this point. To address this, we will conduct additional experiments to evaluate the stability of our results across multiple, randomly sampled sets of 10 images. This analysis will help to assess the consistency of our findings and determine if the results remain robust across different image selections. We will report these additional results in the updated version of the paper in a few days.\\n3. Our benchmark and leaderboard will be published online, where we will openly accept submissions of new defense methods. As mentioned in Section 3.6, we have implemented an automated pipeline to compute results for all new submissions efficiently. To ensure consistency, we require submissions to adhere to a specified PyTorch interface. Regarding submission for new attack methods, they can also be submitted and we will design a separate leaderboard with a comparison of these attacks in the future. \\n\\n[1] https://arxiv.org/pdf/1906.03973\\n\\n[2] https://arxiv.org/pdf/2307.15157\"}",
"{\"summary\": \"This paper systematically evaluates the effectiveness of various defense strategies for NR-IQA models, including adversarial purification, adversarial training, and certified robustness. It also examines these defenses under both adaptive and non-adaptive attack scenarios. Overall, the experiments in this paper are thorough and comprehensive, but the paper's readability could still be enhanced.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The research topic of this paper is interesting and promising.\\n2. The experiments in this paper are comprehensive and detailed.\", \"weaknesses\": \"1. Most of the strategies are directly borrowed from classification tasks, with no new approaches tailored to the specific characteristics of IQA tasks.\\n2. In addition to PSNR and SSIM, incorporating additional quantitative indicators like L_2 and L_\\\\infty would provide a more intuitive understanding of the differences between the original image and the purified image.\\n3. The article is less readable, and many tables contain abbreviations that are not defined.\", \"questions\": \"1. The definition of attack in Equation 1 focuses solely on increasing the model's score. However, if an image already possesses the highest quality, what is the purpose of the attack on it? Why was the idea of decreasing the score of high-quality images/videos not considered?\\n2. Currently, the attack methods employed are those typically used in classification problems. It would be beneficial to consider incorporating some of the attack strategies for IQA that have been proposed in recent years.\\n3. Table 3 shows that many adversarial purification defenses exhibit strong defensive effects. However, these methods should be analyzed more thoroughly. For instance, purification techniques that modify the entire image, such as color quantization and median blur, should include more detailed indicators (L_2 and L_\\\\infty) to better reflect the extent of image modification.\\n4. Most of the analysis in this paper primarily describes the data presented in the tables. It would be beneficial to include an in-depth analysis of the characteristics and connections among these various types of defense strategies, providing great guidance for future research.\\n5. The abbreviations in the table should be added with full spelling in the caption to help readers understand and prevent misunderstandings. \\n6. Equation 2 misses the variable x\\u2019.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for the authors' reply and their efforts for conducting new experiments. The reply to Weakness 1 and 3 is fine and addresses my concerns. But the reply to Weakness 2 only partially addresses the concern. More in-depth analysis about the effectiveness of different defense methods are expected.\"}",
"{\"comment\": \"We sincerely thank the reviewers for their thorough evaluation and constructive feedback, which have helped us improve the quality and clarity of our manuscript. We address your concerns below:\\n### Weaknesses\\n1. Indeed, IQA metrics are also neural networks as classification methods. Thus, the existing defenses with minimal changes may apply to IQA metrics. However, we compare them in a different setting, i.e. we raise the importance of measuring discrepancy and quality of images, which makes the IQA metrics more difficult to defend. Currently, few defenses that satisfy these criteria exist. For example, we included FCN filter [1] introduced by Guschin et al. as a purification approach. Currently, we are computing the Gradient Norm Regularization defense [2] and will update the paper with its results in a few days.\\t\\n2. Thank you for your suggestion. Following your request, we have updated Table 3 in the revised paper to include $L_{inf}$ and MSE indicators as additional measures of quality (line 324). According to our findings, these metrics align closely with PSNR and SSIM, confirming consistent trends across different evaluation methods.\\t\\n3. Thank you for raising the concern regarding the readability. We have addressed this issue by adding definitions for all abbreviations in Table 4 (line 394) and providing a description of the appendix structure to help readers navigate the supplementary material more easily (line 760).\\t\\n### Questions\\n1. From an adversarial perspective, decreasing IQA scores can be achieved similarly to increasing them, requiring only a sign change in the optimization step of each attack. We focused on increasing scores, as this has more practical applications in real-world misuse cases: for instance, IQA metrics are used in video streaming, where artificially raising perceived quality scores can lead to increased bitrate after transcoding, creating stress for network resources. Additionally, inflating IQA scores can be exploited to mislead benchmarks that influence project investments, especially in resource-intensive areas. Modern codecs even include optimization modes that target specific metrics (e.g., Google\\u2019s libaom encoder with its --tune-vmaf option). Given these prevalent scenarios, as mentioned in the problem formulation, we prioritized score increases in our approach.\\t\\n2. In our study, we included 4 recent attacks designed for IQA: FACPA (Shumitskaya et al. \\\"Fast adversarial cnn-based perturbation attack on no-reference image- and video-quality metrics\\\"), Optimized-UAP (Shumitskaya et al. Towards adversarial robustness verification of no-reference image-and video-quality metrics), Korhonen et al. (J. Korhonen and J. You, \\\"Adversarial attacks against blind image quality assessment models\\\"), and three variants of Zhang et al. (Zhang et al., \\\"Perceptual attacks of no-reference image quality models with human-in-the-loop\\\"). These were selected to represent a range of techniques relevant to IQA model vulnerabilities. We acknowledge that new IQA-specific attacks are emerging, and we plan to expand our benchmark in the future to include additional methods as they are developed. Adversarial attacks on IQA models represent a relatively new and growing area of research, with only a limited number of attacks tailored specifically for IQA tasks.\\t\\n3. As mentioned in our comment to weakness #2 above, we added results and conclusions for $L_{inf}$ and $L_2$ (MSE), line 324 in the revised paper.\\n4. 
We added more analysis regarding differences in performances on KonIQ1k and other datasets (line 482) and FPR poor results (line 474).\\n5. We updated the caption of Table 4 in the revised version of the paper (line 394)\\n6. Thank you for pointing to this issue, we added the missing variable (line 146).\\t\\n\\n[1] Gushchin et al., \\u201cAdversarial purification for no-reference image-quality metrics: applicability study and new methods,\\u201d 2024\\n\\n[2] Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 2024\"}",
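The "sign change" mentioned in the response above can be made explicit with a small illustrative helper (assumed callables, not the benchmark's code): flipping the sign of the metric term switches the attack from raising to lowering the predicted score.

```python
def attack_objective(x_clean, x_adv, nr_iqa_model, fr_proxy, lam=1.0, increase=True):
    """Objective -D(x_clean, x_adv) +/- lam * f(x_adv); the sign picks the attack direction."""
    sign = 1.0 if increase else -1.0
    return -fr_proxy(x_clean, x_adv) + sign * lam * nr_iqa_model(x_adv)
```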
"{\"summary\": \"This paper proposes an empirical investigation of the effectiveness of various defense techniques against adversarial attacks on image quality metrics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is decently written.\\n2. The problem under investigation is of practice importance as well described in the Introduction.\\n3. This empirical study is comprehensive.\", \"weaknesses\": \"1. A primary technical concern is label preservation (quality preservation in our context) amidst adversarial perturbations. This research evaluates various adversarial attacks, including those like MADC by Wang and Simoncelli, which may not preserve image quality after attacks. In such instances, the model is expected to provide a different quality prediction for the manipulated image, necessitating ground-truth quality assessment through human evaluation.\\n\\n2. From a conceptual perspective, the reviewer wonders how to understand certified defenses that involve voting and medianing. In classification, it is not hard to comprehend that the robustness is gained through prediction ensembling, which is again under the assumption of label preservation (see Eq. (4)). However, the quality preservation assumption is clearly not true for quality assessment. For example, consider a test image with some Gaussian blur, to perform random smoothing, we shall add Gaussian noise to it according to Eq. (4). Then, the final score is the average quality estimates of a Gaussian blurred image and a Gaussian blurred and noised image (which may be of different quality), which makes less sense to the reviewer.\\n\\n3. The observed effectiveness of the adversarial attacks (evidenced by an SRCC decrease from 0.611 to 0.477 in Table 3) appears inconsistent with prior research such as [Zhang et al., 2022a] (which reduces to random guessing). Given the limited success of these attacks, interpreting the defense results with similar SRCC values (ranging between 0.5 and 0.6) becomes challenging.\\n\\n4. Recent NR-IQA models that integrate visual and language components have not been evaluated in this study.\\n\\n5. The focus of this empirical study aligns more closely with image processing journals rather than a machine learning conference like ICLR, given that no new theories and algorithms are developed.\", \"questions\": \"The authors should work more on Points 1, 2, and 3 in an attempt to raise the reviewer's rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Core concerns are not addressed\", \"comment\": \"Thank the authors for responses. My most concerns are not addressed well. Why can decreasing IQA scores be achieved similarly to increasing them, requiring only a sign change in the optimization step of each attack? I think it is not straightforward. For the definition of adversarial attack, only increasing the score of low-quality video is only one half, missing another half.\"}",
"{\"comment\": \"Thank you for your thoughtful and detailed feedback. We appreciate the recognition of our study and analysis. We address some of the weaknesses below:\\n\\n(1) and (3):\\nAmong IQA-specific defenses, we tested the FCN filter [1], specifically tailored for IQA methods. Additionally, we are conducting experiments regarding the proposed defense (Gradient Norm Regularization[2]) and sampling strategy. We will update the paper with these results in a few days.\\nRegarding (2):\\n1. Figure 3 shows SROCCadv results for each defense method, which highly depends on SROCCclear. Table 24 in the Appendix shows SROCCclear for each dataset. KonIQ-1k consistently demonstrates higher SROCCclear values compared to AGIQA3k and KADID datasets. This can be due to two factors: a) Several IQA models (e.g., TOPIQ and CLIP-IQA+) were trained on the KonIQ-10k dataset or its subsets, giving them a natural advantage on KonIQ-1k. b) Certain IQA models, such as MetaIQA and PAQ2PIQ, generally achieve higher correlation values on KonIQ-10k, as reported in their respective studies, suggesting an inherent dataset bias.\\n2. Figure 4 (a) shows the R score, with higher scores indicating more robust IQA metrics. FPR's low R score indicates that FPR is the most vulnerable model among evaluated models. These results correlate with the previous research (https://videoprocessing.ai/benchmarks/metrics-robustness.html). This vulnerability is likely caused by its atypical architecture for the NR-IQA task, which includes a Siamese network and an attempt to \\u201dhallucinate\\u201d the features of the pseudo-reference image from a distorted one.\\n3. These insights are integrated into the revised version.\\n\\n[1] Gushchin et al., \\u201cAdversarial purification for no-reference image-quality metrics: applicability study and new methods,\\u201d 2024\\n[2] Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 2024\"}",
"{\"comment\": \"We would like to thank the reviewer for valuable comments and thoughtful concerns. We address them below:\\n### Concerns\\n1. Indeed, adversarial perturbations can affect image quality, so we carefully selected perturbation budgets to ensure that any changes in the quality of attacked images remain barely noticeable to human observers. To illustrate this, we provide examples of attacked, and defended images at the end of the appendix (the images in visualizations are cropped, so perturbations can be seen in them).\\nThe goal of purification defenses, as outlined in Equation 2, is to restore the attacked image and IQA metric score to the original image, whose MOS is known. Without subjective assessments, the exact quality of attacked and defended images is unknown; but attacks introduce additional noise to the source image and, by nature, cannot improve perceptual quality. Thus, we assume that the quality of defended images after two transformations (attack and defense) is slightly lower than the original. Since our attacks aim to increase metric scores, we consider the original score a reasonable proxy for the defended image's target quality. For researchers developing new defenses, having the original score as a reference can offer a practical baseline for comparing defense effectiveness, even in the absence of subjective evaluations.\\n2. Thank you for addressing this issue. This is a common challenge for all defenses adapted from classification to regression tasks. Since classifiers focus primarily on semantic features, defenses designed for them typically modify high-frequency information to remove adversarial perturbations. \\nIn our study, adversarial attacks are increasing the attacked IQA score. Therefore, the smoothed version (e.g., using a voting mechanism or the median of scores for blurred images) should align with or remain lower than the original score for the unperturbed image. Our goal is to achieve a defended quality metric that maintains strong SROCC and PLCC performance while demonstrating high robustness against attacks. When applying median smoothing, a denoising step is applied to preserve SROCC and PLCC. As shown in Table 4, the $SROCC_{clear}$ for DMS (Denoised Median Smoothing) is the highest among all evaluated methods. To sum up, certified defenses for IQA metrics can be understood as methods that aim primarily to put back (lower) the attacked IQA score. Without denoising, these defenses will significantly reduce performance (SRCC), as our experiments proved.\\n3. We apologize for making it unclear. The attack we applied from Zhang et al. has a slightly modified objective compared to its original paper. Zhang et al. maximized the absolute difference between IQA scores of original and adversarial images (making the score higher or lower), while our adaptation focuses on increasing scores. This difference in objective accounts for the discrepancy in correlation coefficients on attacked images. In our formulation, the attacked score is consistently higher than the original, which results in a smaller drop in SRCC coefficients than reported in the original paper.\\nOur approach prioritizes score increases because of their relevance in practical applications, such as inflating quality scores in streaming environments and manipulating benchmarks. We will clarify these differences in the revised paper to explain more clearly how the choice of attack objective impacts observed effectiveness.\\n4. Thank you for the suggestion. 
In fact, we evaluated two recent metrics published in 2024: CLIP-IQA+ and TOPIQ. CLIP-IQA+, in particular, integrates both visual and language features for quality assessment, aligning with the latest advancements in NR-IQA. We will clarify this in the paper to ensure it addresses this point directly. We agree that NR-IQA models incorporating visual and language components are a valuable addition to the field. Additionally, we plan to expand our evaluation to include more NR-IQA models, incorporating the models you suggested in future versions of the study.\\n5. While our study is empirical, it addresses critical challenges in machine learning. It contributes to advancing the understanding and development of robust models, particularly in the neural network-based Image Quality Assessment (IQA) metrics domain. Benchmarks are a cornerstone of ML research, frequently published at top A* conferences, including ICLR (e.g. [1], [2], and many more). Our work introduces the first comprehensive benchmark for evaluating adversarial defenses on IQA metrics, including 1) A publicly available dataset of adversarial images. 2) A leaderboard for reproducibility and comparison of defense methods.\\nThis benchmark provides a standardized framework for testing and improving the robustness of ML-based IQA metrics.\\n\\n[1] Q-Bench (ICLR 2024 spotlight) https://openreview.net/forum?id=0V5TVt9bk0\\n\\n[2] ViLMA (ICLR 2024) https://openreview.net/forum?id=liuqDwmbQJ\"}",
"{\"summary\": \"The paper aims to benchmark and evaluate the robustness of 30 different adversarial defense methods against 14 adversarial attacks regarding IQA metrics. It emphasizes the need for effective defenses due to the unique challenges posed by preserving image quality while defending against adversarial perturbations. It presents a comprehensive analysis of the efficiency of various adversarial defense methods for image quality assessment (IQA) metric.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) This paper gives a comprehensive comparison of multiple defense methods against IQA under a variety of attacks, and draws a few conclusions under different scenarios.\\n(2) The detailed analysis of the trade-offs between performance and robustness in various defense strategies offers practical guidance for researchers and developers.\\n(3) The inclusion of statistical tests and evaluations of quality scores adds robustness to the findings.\", \"weaknesses\": \"(1)\\tAlthough the paper considers 30 different defense methods, it ignores some defense methods which are tailored for IQA methods specifically such as [1]. These methods should be discussed and compared in the paper, as the goal of this paper is to discuss the defense of IQA.\\n(2)\\tThe paper has evaluated different defense methods under different indicators, showing a lot of charts, but it lacks the in-depth analysis about what is the reason behind the effectiveness of different defense methods which is important. For example, the defense performance on the KonIQ-1k dataset on the right of Figure 3 exceeds the other two datasets in multiple defense methods. What is the reason? Why do many attack methods in Figure 4 achieve the worst defense performance on FPR, in terms of R robustness?\\n(3) More experimental details and analysis are expected. For example, in line 227, 50 images are selected from 1k images for attack, do different selections of attack images affect the performance of attack and defense? Does it affect the conclusion?\\n\\n[1] Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 2024\", \"questions\": \"Please see Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Overall Response\", \"comment\": \"Dear Reviewers,\\nWe sincerely thank you all for your thorough and constructive feedback. We appreciate the time spent reviewing our work.\\nWe are grateful for the positive reception of our work. Multiple reviewers highlighted our study\\u2019s practical contribution to research and industry and its detailed and comprehensive analysis.\\n\\nTo address the main concerns raised in the reviews, we have made several improvements.\\n* Following the _RxLB_'s suggestion, we have evaluated defense method [1] and included its results in Figure 1 and Table 4. \\n* We have conducted additional experiments to evaluate the impact of sampling 50 images from the dataset. The results of these experiments are included in Section A.2 in the Appendix. \\n* Following the _sRtC_'s suggestion, we have added $MSE$ and $L_{inf}$ metrics to Table 3 to better measure effects on image quality.\\n* To enhance the readability of the Appendix, we have added section A.0 with a brief Appendix structure.\\n* We also have corrected some mistakes and typos pointed out by reviewers.\\nThank you again for your valuable feedback!\\n\\n[1] Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 2024\"}",
"{\"title\": \"Addressing decreasing IQA scores and comparing it with increasing\", \"comment\": \"We apologize for the lack of clarity in our original explanation.\\nGenerally, to increase the IQA score, white-box attacks perform gradient ascending with some modification (spatial maps for Korhonen et al., proxy IQA metric for Zhang et al., etc.). To adapt these attacks to decrease IQA scores, the optimization step is reformulated to perform gradient descent. For instance, the IFGSM update step changes as follows: \\n* Increase IQA score: $x_t = x_{t-1} - \\\\alpha * sign(\\\\nabla(Loss))$\\n* Decrease IQA score: $x_t = x_{t-1} + \\\\alpha * sign(\\\\nabla(Loss))$\\n\\nThis adjustment effectively decreases IQA scores using methodologies similar to those used to increase them.\\n\\nTo evaluate the effectiveness of these approaches, we attacked 1,000 images from the KADID dataset using both settings (increasing and decreasing IQA scores). The table below summarizes the results, showing SSIM scores and $D_{score}$ percentages for decreasing and increasing IQA values across various attacks and models. The first value in each column corresponds to decreasing IQA scores, while values in parentheses indicate increasing scores.\\n\\n| Attack | IQA model | SSIM | $D_{score}$ \\n| ----------- | ----------- | ----------- | ----------- | \\nIFGSM|CLIP-IQA+|0.977(0.977)|-82.4%(66.74%)|\\nKorhonen et al.|CLIP-IQA+|0.957(0.957)|-80.3%(64.18%)|\\nZhang et al.|CLIP-IQA+|0.949(0.949)|-87.7%(65.69%)|\\nIFGSM|Koncept|0.960(0.953)|-126%(141.5%)|\\nKorhonen et al.|Koncept|0.960(0.960)|-112%(104.4%)|\\nZhang et al.|Koncept|0.949(0.948)|-118%(111.5%)|\\nIFGSM|MANIQA|0.961(0.952)|-98.2%(109.6%)|\\nKorhonen et al.|MANIQA|0.965(0.963)|-96.7%(94.81%)|\\nZhang et al.|MANIQA|0.952(0.952)|-97.0%(104.8%)|\\nIFGSM|SPAQ|0.965(0.962)|-72.4%(82.9%)|\\nKorhonen et al.|SPAQ|0.955(0.951)|-63.2%(78.97%)|\\nZhang et al.|SPAQ|0.944(0.941)|-68.7%(89.2%)|\\nIFGSM|TOPIQ|0.974(0.972)|-103.%(108.8%)|\\nKorhonen et al.|TOPIQ|0.958(0.958)|-100%(91.72%)|\\nZhang et al.|TOPIQ|0.949(0.952)|-107%(92.9%)|\\n\\nThe results demonstrate that the success rates for increasing and decreasing IQA scores are largely comparable across models and attacks. SSIM values are almost identical and $D_{score}$ are aligned, highlighting that attacks work very similarly in both cases. Given the comparable effectiveness and the limited practical applications of decreasing IQA scores, we decided to concentrate on attacks only increasing IQA scores, as was done in other studies [1, 2].\\n\\n[1] Shumitskaya et al., \\u201cIOI: Invisible One-Iteration Adversarial Attack on No-Reference Image-and Video-Quality Metrics,\\u201d In Proceedings of the 41st International Conference on Machine Learning (ICML), pages 45329\\u201345352. PMLR, 2024 \\n\\n[2] Antsiferova et al., \\u201cComparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks,\\u201d in Proceedings of the 2024 AAAI Conference on Artificial Intelligence, doi:10.1609/aaai.v38i2.27827\"}",
"{\"summary\": \"The paper attempts to set up benchmarking for adversarial attacks and defenses in the context of No-Reference Image Quality Assessment algorithms. The coverage of the work seems to be good\\u201429 defense strategies, 14 attack methods, and 9 IQA methods. Lots of experiments (as needed) and results are provided, as expected from a benchmarking paper.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Major strengths :\\n\\n1. The paper presents the first comprehensive benchmark of 29 defense methods,14 attack methods, and 9 IQA methods for NR-IQA\\n2. Multi-dimensional approach to evaluation: robustness, preserving correlation with human perception (SRCC/PLCC), and quality of the image with respect to original (PSNR, SSIM)\\n3. Practical Contribution to Research and Industry -> Good work on setting up a public benchmark.\", \"weaknesses\": \"Major weaknesses :\\n\\n1. The paper is a valuable resource for the IQA community. But in terms of technical novelty, it would have been nice to have an attack/defense method with some motivation for the IQA task. I feel this paper would have been more suitable for a Benchmarking Track, but I understand ICLR lacks such a track and had to submit it to the main conference track\\n2. The appendix provides many results, but it is very difficult to connect them to the main text, which points to the paper's poor organization.\\n3. The authors should add LPIPS to the results in Table 3 (and other similar tables) along with PSNR and SSIM. PSNR is not a perceptual metric and can be reliable, leaving SSIM as the only metric. It is better to report both SSIM and LPIPS scores.\", \"minor\": \"1. Paper formatting needs to improve, and content organization can also be better. For example, Fig 1 does not discuss page 8.\\n2. Section 3.1 under Adversarial defenses: It is better to divide the paragraphs into different types. This will make reading the paper much easier.\", \"questions\": \"1. In section 3.2, \\\"clustering the KonIQA dataset by spatiotemporal complexity.\\\" - Could you please explain the temporal aspect of images?\\n2. Given the high computational demands of certified defenses and 10 images being used? How would you expect the results to vary as you sample different sets of 10 images?\\n3. Logistics questions on the leaderboard :\\n How do you plan to maintain the leaderboard, and will there be mechanisms for incorporating new defense/attack techniques over time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Not Applicable\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"To further address your concern, we have added Section A.7 in the Appendix, where we provide additional analysis and insights into the effectiveness of different defense methods. Below, we summarize the key findings:\\n\\n* Role of Geometric Transformations and Distribution Awareness:\\n\\nDefenses that incorporate geometric transformations or account for the differences between clean and attacked image distributions perform significantly better than those that do not. For instance, transformations like Random Rotate introduce variability that helps mitigate adaptive attacks, while static transformations are less effective due to their predictability.\\n\\n* Effectiveness of Compression Techniques:\\n\\nCompression-based methods are highly effective as they remove high-frequency noise while preserving the underlying image structure. This explains their strong performance against attacks that exploit fine-grained perturbations.\\n\\n* Performance of Denoising Methods:\\n\\nMethods like MPRNet and Real-ESRGAN demonstrate moderate effectiveness. However, their limitations stem from being trained on simpler noise types, highlighting the need for fine-tuning on adversarial perturbations to improve robustness.\\n\\n* Diffusion-Based Models:\\n\\nWhile diffusion-based models offer tunable strengths and have shown success in classification tasks, they face challenges in quality assessment tasks due to the introduction of their own artifacts, which are degrade perceived quality and worsen correlations.\\n\\n* Adaptive Attack Mitigation:\\n\\nHigh-randomness approaches combined with geometric transformations, such as Random Rotate, are particularly effective at mitigating adaptive attacks. In contrast, methods relying on static transformations are less adaptive and thus less robust.\\n\\nThese observations provide a foundation for future research into improving defense strategies. Section A.7 expands on these findings and includes potential directions for further investigation, such as fine-tuning denoising models and combining multiple defenses to leverage their complementary strengths. We hope this additional analysis addresses your concerns and welcome further feedback.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thanks for the reply. It would be good to see results related to Q#2.\"}",
"{\"title\": \"Additional comments\", \"comment\": \"Thank the authors for responding to the comments; the reviewer highly appreciates it.\", \"regarding_point_1\": \"The reviewer appears to have overlooked that the attack is solely directed toward increasing the predicted quality score (i.e., larger scores indicate better-predicted quality). Under this assumption, the attack becomes a specific case of Zhang et al., where $\\\\lambda = \\\\infty$, and can be made unbounded and therefore trivial: one could simply compute $x^\\\\star =\\\\arg\\\\max_x q(x)$. The resulting image would likely contain additional visible noise, but the predicted quality score would still be maximized. All NR-IQA models would be inherently vulnerable to this type of attack.\", \"regarding_point_2\": \"The reviewer suggests that the key distinction does not lie between classification and regression. Instead, it still boils down to whether the added perturbation preserves the label or not. If the perturbation is label-preserving, ensemble methods (e.g., averaging or smoothing predictions) can be applied to enhance robustness against adversarial attacks.\", \"regarding_point_3\": \"Thank the authors for the clarification. These details should be incorporated into the main text for better clarity.\\n\\nIn light of the additional comments, the reviewer has decided to maintain the current rating.\"}",
"{\"title\": \"No new insight to adversarial defense\", \"comment\": \"Overall, this work does not provide new insight for adversarial defense on IQA. It synthesizes existing adversarial attacks on IQA and ensembles some testing results, lacking enough novelty to reach the conference bar. I keep my rating.\"}",
"{\"comment\": \"We have conducted additional experiments to evaluate the impact of sampling different sets of 10 images on certified defenses. We have sampled 10 different sets of images from KONIQ-10k following the sampling strategy described in the paper, evaluated certified defenses for the PAQ2PIQ IQA model and calculated mean and 95% confidence intervals for $D_{score}$, $R_{score}$ and Cert.R./Cert.RD. These results are summarized in the Table below:\\n| Certified Defense| Robustness ($R_{score}\\\\uparrow$) | $D^{{(D)}}_{{score}}$$\\\\downarrow$ | $Cert. R \\\\uparrow$ / $Cert. RD \\\\downarrow$\\n| :-| :- | --------: | -: | \\nRS | $5.54 \\\\pm 0.23$ | $5.44 \\\\pm 1.57$ | $0.179 \\\\pm 0.01 / \\\\infty$ \\nDRS | $1.67 \\\\pm 0.13$ | $14.9 \\\\pm 4.20$ | $0.172 \\\\pm 0.01 / \\\\infty$ \\nDDRS | $5.71 \\\\pm 0.25$ | $2.63 \\\\pm 0.59$ | $0.165 \\\\pm 0.01 / \\\\infty$ \\nDP | $5.60 \\\\pm 0.26$ | $2.49 \\\\pm 0.54$ | $0.161 \\\\pm 0.01 / \\\\infty$ \\nMS | $1.33 \\\\pm 0.17$ | $6.97 \\\\pm 1.38$ | $0 / 2.83 \\\\pm 0.36$ \\nDMS | $1.41 \\\\pm 0.13$ | $7.49 \\\\pm 1.41$ | $0 / 2.28 \\\\pm 0.33$ \\n\\nThe results align closely with those presented in the main paper. For example, RS and DRS perform best in Cert.R, while RS, DDRS and DP show the best $R_{score}$. The consistency across samples is further supported by narrow confidence intervals, which suggest minimal variability between different sets of sampled images. Given the current results, we consider the findings in the main paper representative across different image subsets.\"}",
"{\"metareview\": \"The authors present a comprehensive study benchmarking the robustness of 29 defense methods for 14 adversarial attacks against 9 different image quality assessment (IQA) metrics. While the study is comprehensive and of value to the IQA community, there is too little new insight generated to be a good fit for ICLR.\\n\\nIt is recommended to consider sending the work to an image processing journal or to a benchmarking track of a major conference.\", \"additional_comments_on_reviewer_discussion\": \"During the review phase, authors addressed concerns around legibility and organization of the paper, and added additional missing experiments. However, all reviewers cited the lack of new insights as one of the main weaknesses of the paper, which was not addressed satisfactorily by the authors.\"}"
]
} |
1PZt5nFlzH | Size-aware Compression of 3D Gaussians with Fine-grained Mixed Precision Quantization | [
"Shuzhao Xie",
"Jiahang Liu",
"Weixiang Zhang",
"Shijia Ge",
"Sicheng Pan",
"Chen Tang",
"Yunpeng Bai",
"Zhi Wang"
] | In this paper, we propose a method to automatically select hyperparameters to compress 3D Gaussians to a target file size while maximizing visual quality. We iteratively search for a hyperparameter configuration until the file size meets the specified budget. However, existing compression frameworks require completing the entire compression process to determine the compressed file size, which is time-consuming. To accelerate this, we design a tailored size estimator for frameworks that can determine hyperparameters without requiring fine-tuning. Although the finetuning-free frameworks are more predictable, they typically underperform compared to fine-tuning-based approaches, which utilize end-to-end differentiable structures to achieve superior results. To close this performance gap, we propose a mixed-precision quantization strategy that exploits the heterogeneity of attribute channels by compressing each channel with different bit-widths. The resulting combinatorial optimization problem is efficiently solved using 0-1 integer linear programming. Additionally, we partition each attribute channel into blocks of vectors, quantizing each vector based on the optimal bit-width determined in the previous step. The block length is then determined via dynamic programming. Our method identifies hyperparameter settings that meet the target file size within 70 seconds, outperforming state-of-the-art methods in both efficiency and quality. Extensive experiments demonstrate that our approach significantly enhances the performance of fine-tuning-free methods, with its upper-bound performance comparable to that of fine-tuning-required techniques. | [
"3D Gaussian Splatting",
"Mixed-precision Quantization",
"Compression"
] | https://openreview.net/pdf?id=1PZt5nFlzH | https://openreview.net/forum?id=1PZt5nFlzH | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"f6ggh7TEpx",
"X1FFSMS3yB",
"NniYqBPoGt",
"CiiwqyBBuO",
"2CFVo3bpKr"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1730211810004,
1730716837124,
1731434153292,
1730701392405,
1730362431095
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission785/Reviewer_EpWC"
],
[
"ICLR.cc/2025/Conference/Submission785/Reviewer_U8m2"
],
[
"ICLR.cc/2025/Conference/Submission785/Authors"
],
[
"ICLR.cc/2025/Conference/Submission785/Reviewer_JTqY"
],
[
"ICLR.cc/2025/Conference/Submission785/Reviewer_3Yz8"
]
],
"structured_content_str": [
"{\"summary\": \"In this paper, the authors propose a mixed-precision quantization method for 3DGS compression. Specifically, different bit-widths are assigned to different attribute channels of the gaussians. In addition, each attribute channel is partitioned into blocks of vectors. While previous methods require completing the entire compression process to determine the compressed file size, the proposed method introduces a size estimator to determine the model size within 70 seconds. Experiments show that the proposed method improves the performance of fine-tuning-free approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The motivation is clear and this paper is easy to follow.\\n(2) Superior performance as compared with previous methods.\", \"weaknesses\": \"(1) My major concern is about the marginal performance gain. In Table 2, it seems the proposed method is even inferior to HAC, especially on the Tanks&Temples dataset. As compared to HAC, our-small has similar model size but produces lower performance on all metrics. For our-large, superior performance is acheived at the cost of much larger model size. So I wonder the superiority of the proposed method as compared to HAC. I can see that the proposed method is finetuning-free, but the authors should clarify which methods are fair results as compared to the proposed method.\\n(2) Following the first comment, as shown in Fig. 4, the PSNR score of the proposed method seems to be lower than that of HAC under the same size. This further shows that the proposed method does not produces either higher accuracy or better efficiency. So the effectiveness of the proposed method seems to be further highlighted.\\n(3) As mix-precision quantization is one of the major contributions for the proposed method, the bit-widths for different attribute channels should be discussed, which could be an interesting point for follow-up researchers. It would be better if the bit-widths for different attribute channels under different buget can be further analyzed.\\n(4) Typos: \\n- Line 40: 5.2710^6?\\n- MPQ is not defined in the paper\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a size-aware 3DGS compression approach designed to achieve accurate size estimation. Building upon the MesonGS framework, the authors first develop a size estimator to obtain precise size measurements. To enhance performance further, a mixed-precision quantization strategy that incorporates 0-1 integer linear programming and dynamic programming is proposed. Experimental results demonstrate that the proposed method achieves superior compression quality compared to existing approaches while requiring less search time.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The proposed method significantly reduces search time relative to existing approaches while ensuring accurate size estimation and strong compression performance.\", \"weaknesses\": \"1. The novelty of this paper is limited. The overall coding architecture closely resembles that of MesonGS, with only marginal innovations. The primary contributions consist primarily of technical enhancements, specifically 0-1 integer linear programming and dynamic programming, rather than presenting novel research insights.\\n2. The application of the proposed method is confined to MesonGS, which restricts its potential use cases. To demonstrate the effectiveness of the method, it would be beneficial to apply it to multiple baseline models.\\n3. The performance gains attributed to the proposed method are not adequately analyzed. Given that the core idea and methodology focus on accurate size estimation, the substantial performance improvement over MesonGS (as shown in Table 2, with Mip-NeRF 360 increasing from 26.68 dB to 27.65 dB) appears insufficiently justified. A detailed analysis of the contribution of each component, including the transition from 3DGS to Scaffold-GS, the proposed mixed-precision quantization strategy, and the fine-tuning process, is warranted.\\n4. There are several writing issues. For example, \\u201cPNSR\\u201d in Table 1 should be \\u201cPSNR\\u201d. Additionally, notations should be defined upon their initial appearance, such as \\u201cAi\\u201d in Equation (4) and the \\u201c\\u2299\\u201d symbol in Equation (5).\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you to all reviewers and the Area Chair for your thoughtful and detailed feedback on our submission. We are very grateful for the time and effort each of you has dedicated to evaluating our work. Your insights have provided us with valuable directions to improve our research. After careful consideration, we have decided to withdraw the paper in order to address these suggestions more comprehensively. We look forward to using this feedback to refine our work and hope to submit an improved version in the future. Thank you again for your invaluable support and constructive input.\"}",
"{\"summary\": \"The paper presents a novel approach to size-aware compression of 3D Gaussians, focusing on fine-grained mixed precision quantization to optimize file size while maximizing visual quality. The authors propose a framework that includes several key components: the selection of a base model (ScaffoldGS), a compression framework (MesonGS), and a size estimator.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) This is a well-written paper.\\n(2) The proposed method is compared with various methods. The experiments are complete and convincing \\n(3) Some visualizations are helpful to understand.\", \"weaknesses\": \"\\uff081\\uff09Lack of FPS comparisons\", \"questions\": \"\\uff081\\uff09In line 202, how do you obtain the average important score of anchors in detail?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a method for compressing 3D Gaussian models to meet target file sizes while preserving visual quality. The key contributions include a quick size estimator for compression prediction, a hierarchical mixed precision quantization approach using integer linear programming and dynamic programming, and a complete compression pipeline that finds optimal parameters 100x faster than existing methods. The approach is validated on standard datasets, showing competitive results in both compression speed and visual quality metrics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Formulation of the compression problem that explicitly considers target file sizes, addressing a practical need not well-handled by existing methods\", \"Combination of size estimation with mixed precision quantization, offering a new approach to balancing compression and quality\", \"The original use of 0-1 ILP for bit-width selection in 3D Gaussian compression, adapting techniques from neural network quantization to a new domain\", \"Clear justification for design choices (e.g., choosing MesonGS over other frameworks due to size stability)\", \"100\\u00d7 speedup in parameter search makes the method much more practical for real-world applications\"], \"weaknesses\": [\"I'm concerned about the paper's core assumption that attribute channels are independent during quantization. This feels like a significant oversimplification without any supporting evidence. I would like to see some experiments showing whether there's a correlation between channels' quantization errors and how this impacts the final results.\", \"Why only test on static scenes with a single file size target (30MB)? For a paper claiming to be \\\"size-aware,\\\" I'd expect to see results across various target sizes and more challenging scenarios like dynamic scenes. I'm particularly curious how their method handles SH coefficients under different lighting conditions.\", \"The performance analysis feels incomplete. We get plenty of quality metrics, but what about memory usage during compression? Also, they mention using CUDA for speed-up but don't explain the implementation details - this kind of information is crucial for anyone trying to replicate their work.\", \"The paper shows how their method works but doesn't really explain how to use it. How do we choose the step size U or the number of blocks K in practice? Table 6 shows it's robust to different K values, but I'm still wondering what values I should pick for my use case.\", \"I'm worried about error propagation in their system. What happens when errors from the inter-attribute stage combine with those from the intra-attribute stage? And how does the method behave with very small target sizes? 
Some analysis of failure cases would really help understand the method's limitations.\"], \"questions\": [\"Could you provide more details about your method's performance on dynamic scenes, particularly regarding temporal coherence and compression consistency between frames?\", \"What are the memory-speed trade-offs in your compression pipeline, and how does the peak memory usage compare to existing methods?\", \"Have you identified any quality cliffs or failure cases where the compression performance degrades significantly (e.g., minimum achievable file size, complex geometries, or detailed textures)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1PDz4Ny1N2 | Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation | [
"Chen Xu",
"Yuxin Li",
"Wenjie Wang",
"Liang Pang",
"Jun Xu",
"Tat-Seng Chua"
] | Group max-min fairness (MMF) is commonly used in fairness-aware recommender systems (RS) as an optimization objective, as it aims to protect marginalized item groups and ensures a fair competition platform. However, our theoretical analysis indicates that integrating MMF constraint violates the assumption of sample independence during optimization, causing the loss function to deviate from linear additivity. Such nonlinearity property introduces the Jensen gap between the model's convergence point and the optimal point if mini-batch sampling is applied. Both theoretical and empirical studies show that as the mini-batch size decreases and the group size increases, the Jensen gap will widen accordingly. Some methods using heuristic re-weighting or debiasing strategies have the potential to bridge the Jensen gap. However, they either lack theoretical guarantees or suffer from heavy computational costs. To overcome these limitations, we first theoretically demonstrate that the MMF-constrained objective can be essentially reformulated as a group-weighted optimization objective. Then we present an efficient and effective algorithm named FairDual, which utilizes a dual optimization technique to minimize Jensen gap. Our theoretical analysis demonstrates that FairDual can achieve a sub-linear convergence rate to the globally optimal solution and the Jensen gap can be well bounded under a mini-batch sampling strategy with random shuffle. Extensive experiments conducted using six large-scale RS backbone models on three publicly available datasets demonstrate that FairDual outperforms all baselines in terms of both accuracy and fairness. | [
"Jensen Gap",
"Recommender Systems",
"Max-min Fairness"
] | Accept (Poster) | https://openreview.net/pdf?id=1PDz4Ny1N2 | https://openreview.net/forum?id=1PDz4Ny1N2 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xw9yt6dxqW",
"ubQjwkaqFs",
"uU67miKtR4",
"uRxBpEuA7P",
"tkr6Agcq1s",
"rYrP93Q8kO",
"pxyKJMds0E",
"oRubiLbojK",
"lRy72WoX5h",
"kdTcqtccLq",
"k0VMeJmhhZ",
"gSt8iRLVly",
"g5CeNiMAnR",
"dAGm0XnnjK",
"Z2R5YWp10j",
"XiAb9qc8Pl",
"WdLAAvXcCH",
"WAL8hVT1zT",
"UMfJZ9sXgz",
"SaNYIGDVJe",
"RZEUd9WxPC",
"QbIhhbfWIT",
"OyxxaCP9GO",
"O4zBrWzA4w",
"IgRdCTdJDy",
"7bMV1e7Bje",
"6qWDtxIQe0",
"4K6acl3hne",
"2vkkN2tR0f",
"1xqLWZ808M"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732695794097,
1732214408987,
1732181929923,
1730510615478,
1732183028346,
1732182903115,
1730671599310,
1732673527190,
1730564129573,
1730706074064,
1733132306439,
1733163197388,
1737523743143,
1732182311716,
1733128301471,
1732182432800,
1732181791640,
1735042271438,
1732182871943,
1730411081978,
1732291973238,
1732522874454,
1732182675977,
1732237601047,
1732190777478,
1732182575974,
1732286995313,
1732440039256,
1732182645745,
1732182467597
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_EADi"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_SoKV"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_w1tq"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_w1tq"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_REQg"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_1uT1"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_REQg"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Area_Chair_WcE4"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_1uT1"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_EADi"
],
[
"ICLR.cc/2025/Conference/Submission6075/Reviewer_SoKV"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6075/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Reviewer REQg,\\n\\nWe have revised the PDF and would like to know if our experimental results and discussions address your concerns. Since the rebuttal period has been extended, please feel free to reach out if you have any additional questions or concerns to discuss.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"comment\": \"Thanks for your hard work and final suggestions, we will not label metrics such as NDCG, MRR, and MMF with percentage signs (%) in the revised version. Your suggestion really helps us to improve our paper.\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Weakness regarding to the quality and clarity of the presentation\", \"comment\": \"W1.1: For Formula 1, the meaning of the symbol $n_i$ seems missing. Why is it emphasized that there are users with more than K interactions? Why must the most recent K interactions be kept instead of other numbers?\\n\\n* A1.1: Apologies for the confusion. \\n\\n$n_i$ is defined in line 129, representing the number of groups to which item $i$ belongs. We will recall this definition when introducing Formula 1 for clarity. \\n\\nFor the $K$ interactions, apologies for the confusion in notation. $K$ should be replaced with another parameter $H$, which represents the truncated user historical behavior numbers. We have conducted experiments (see Table 4 in the appendix) to demonstrate performance changes with respect to $H$. We will make the necessary corrections.\\n\\nW1.2: Dividing the |U| users into |U|/B batches and performing gradient descent methods on each batch\\u201d is also confusing. In my experience, each batch only contains a subset of users, and all interactions of each user are included. Secondly, the following description seems to use a random batch sampling, which is unclear.\\n\\n* A1.2: Sorry for the confusion. \\n\\nFor the first question, we indeed use the settings you mentioned: each batch contains only a subset of users, and all interactions are utilized. Here, we simply wanted to emphasize that in each epoch, we apply random batch sampling with a batch size of B. To clarify, we will revise the sentence to: We apply the regular mini-batch sampling strategy with a batch size of B.\\n\\nSecondly, we emphasize this aspect because our algorithm does not depend on the order in which users arrive, unlike other debiasing online learning algorithms that may require specific user training sequences. We will compare our approach with existing methods and highlight that our algorithm can be seamlessly integrated into regular mini-batch strategies, offering broader applicability.\\n\\nW1.3: Regarding the second paragraph of Section 4.2 and Figure 1, as far as I understand, the convergence of the same group should be examined under different batch sizes to obtain the desired observations and conclusions.\\n\\n* A1.3: Thank you for your suggestions. Figure 1(a) was conducted with the same group size (G=7) under different batch sizes, while Figure 1(b) was conducted with the same batch size (B=32) under different group sizes. We will include these details in the experimental descriptions for better clarity.\\n\\nW1.4: For the last paragraph of Section 5.2.1, the meaning of _P_ is missing.\\n* A1.4: Apologies for the confusion. $P(w)$ refers to the predicted probability of the word $w$ generated by the LLMs. We will clarify this in the revised version.\\n\\nW1.5: Based on the current description of Section 5.2.2, I can't find any instructions on handling the first batch since it lacks a pre-batch to compute $g$. Secondly, does the operation of sampling Q items bring noise or instability? This requires more discussion and experimental analysis.\\n* A1.5: Sorry for the confusion. \\n\\nFor the first question, in Algorithm 1 line 7, we initialize $g$ as 0, which will not make an effect on first batch. We will illustrate it in Section 5.2.2. \\n\\nFor the second question, intuitively, a larger Q provides a more accurate gradient estimation but also incurs higher computational costs. We have conducted experiments to evaluate the impact of Q and will present the results. 
The results were conducted under the same settings of analysis section.\\n\\n\\n|Q| 50 | 100 | 200 | 300 | 400 | full (unbiased) |\\n|--|--|--|--|--|--|--|\\n| NDCG (%)| 1.08 | 1.08 | 1.15 | 1.19 | 1.19 | 1.29|\\n| MMF (%)| 1.2 | 1.28 |2.18 |2.10 |2.29 |2.31|\\n\\n\\nFrom the results, we observe that increasing the sample value Q leads to improvements in both accuracy and fairness performance. However, in LLM-based recommender systems, a larger Q significantly increases training time (with __each item requiring an additional 1.5 seconds__) and storage space. Different applications should select appropriate Q values based on their specific accuracy, fairness requirements, and computational constraints. We will include these experimental results and discuss them in the appendix.\\n\\nW1.6: The current placement of figures and tables does not facilitate reading and needs to be placed as close to the corresponding description as possible.\\n* A1.6: Thanks for your suggestions! We will put the figures and tables more close to the corresponding description as possible in the revised version.\"}",
"{\"summary\": \"In this paper, the authors claimed that current group max-min fairness (MMF) methods would introduce a Jensen gap between the model\\u2019s\\nconvergence point and the optimal point. The authors analyzed the existence of Jensen gap theorically and emprically. Then, the authors proposed FairDual, a dual-optimization approach that guaranteed a bound for the Jesen gap. They conducted comprehensive experiments to show the effectiveness and efficiency of FairDual compared to baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The motivation is well established with theorical and emprical analysises of the Jesen gap in Section 4.\", \"The method is solid with a guaranteed bound for Jesen gap, and the experiment also showed that the proposed method indeed has lower gap than other baselines.\", \"The authors conducted complete and comprehensive evaluations, including the effictiveness, Jensen gap analysis, case study, training efficiency and parameter analysis.\"], \"weaknesses\": [\"Baselines. The authors mentioned that there are two types of approaches to bridge the Jensen gap, re-weighting and optimization-based methods. But the authors mostly compared only re-weighting methods in the experiments, while ignoring optimization-based methods they mentioned in the introduction part, such as Abernethy et al. 2022, Cousins 2022, Demidovich et al., 2023 and Agarwal et al., 2018. I suggest the authors to add state-of-the-art baselines in optimization-based methods mentioned in the paper.\"], \"questions\": [\"Can authors explain is there a reason that in some cases the MRR is not statistically significant as shown in Table 1 and 2? For example, the MRR in the top-5 results of RecFormer and BigRec is not significant and no improvement, while the improvement of NDCG and MMF is significant. Can authors give some insights on this observation?\", \"In the visualization of Figure 3(c), the differences in the patterns of the two figures are not quite obvious. The classification boundary of FairDual also seems to exist. Could the authors provide some quantitative results to distinguish the different patterns, such as divergence in the two distributions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your hard work and valuable questions.\\n\\nW1. It'd be helpful if the authors could provide more interpretations for Theorem 4, the main theoretical result and, in particular, comment on their technical contributions. What is the main technical novelty in attaining this theoretical result? A proof sketch could also be helpful.\\n* A1: The main technical novelty lies in **transforming the Jansen gap problem into its dual form** and leveraging dual gradient learning techniques to bound the **complementary slackness** and establish the theoretical bounds. Next is the proof sketch:\\n\\n\\nFirstly, we write **the dual form of the Jansen gap**: \\n\\n$J(B) = |\\\\sum_{j=1}^Nw({\\\\mu}^j)-w({\\\\mu})|$ and $w({\\\\mu}^j)= ({\\\\mu}^j)^{\\\\top}({e}-({A}^j)^{\\\\top}{l}^j)$\\n\\nThen we utilize the **online gradient descent** to bound the **complementary slackness** of $w({\\\\mu}^j)$:\\nThe gradient of $w({\\\\mu}^j)$ can be bounded as $\\\\|\\\\widetilde{{g}}^j\\\\|_2^2 \\\\leq L|\\\\mathcal{G}|^2$.\\n\\nFinally, we put them together will have our conclusion. We will re-organize the proof and state the main technical novelty.\", \"w2\": \"Following the point above, in the numerical experiments, there appears to be some non-monotonic variation of the Jensen gap w.r.t. the batch size. I wonder if the authors can comment on why this is the case. Is this consistent with the theoretical results?\\n* A2: Thanks for the question. \\n\\nFirstly, our previous monotonic variation of the Jensen gap w.r.t. the batch size is based on ideal fairness aware algorithms. However, in real algorithms, some bias (e.g. gradient and online bias) will influence the process, making it not strict monotonic. \\n\\nSecondly, our theoretical results provide an upper bound on the Jansen gap, and we observe that the Jansen gap in our method exhibits a monotonic trend with respect to batch size and group size. This aligns with the theoretical results, as the theory does not strictly require the gap to be perfectly monotonic.\", \"w3\": \"Could the results extend to alternative fairness constraints beyond max-min fairness?\\n* A3: Thanks for the question. In Theorem 1, we demonstrate that our optimization objective is equivalent to the power-family fairness framework, which encompasses mainstream fairness definitions such as Entropy Fairness, $\\\\alpha$-Fairness, and Theil Index [1]. Consequently, our method is __highly adaptable and can be generalized to various fairness objectives within this framework__. \\n\\n\\nWe also test the performances of other fairness metric Gini Index Compared to the baselines (Table 1) on MIND datasets. Note smaller Gini Index means more fairness.\\n|Models|GINI@5|GINI@10|GINI@20|\\n|-------------------|--------|-------|--------|\\n|Prop|0.488|0.488|0.472|\\n|DRO|0.511|0.476|0.487|\\n|SDRO|0.503|0.478|0.453|\\n|IFairLRS|0.458|0.454|0.448|\\n|**FairDual(ours)**|**0.444**|**0.450**|**0.441**|\\n\\nFrom the results, we can observe that our model can still perform good on other fairness metrics.\\n\\nWe believe our paper can help other researchers explore its applicability to various loss functions, and other fairness metrics, which is also our contributions to the communities.\\n\\n\\n[1] Lan, T., & Chiang, M. (2011). An axiomatic theory of fairness in resource allocation. _George Washington University, http://www. seas. gwu. edu/tlan/papers/fairness. pdf, Tech. 
Rep_.\", \"w4\": \"What is the computational complexity of the proposed algorithm and how does it compare with the other baselines?\\n* A4: Thanks for the question. \\n\\nFirstly, we all have parameters of the same magnitude (i.e., group size parameters, which are in the range of hundreds and negligible compared to the backbone). Our method only requires additional space for Q item embeddings and extra training time (Q * 1.5s). Applications can trade off Q based on available resources:\\n\\n|Q| 50 | 100 | 200 | 300 | 400 | full (unbiased) |\\n|--|--|--|--|--|--|--|\\n| NDCG (%)| 1.08 | 1.08 | 1.15 | 1.19 | 1.19 | 1.29|\\n| MMF (%)| 1.2 | 1.28 |2.18 |2.10 |2.29 |2.31|\\n\\nSecondly, as mentioned in Table 3 of the original paper, although there is an additional time overhead per round, our convergence speed accelerates by 30% compared to the best baseline. This 30% improvement in convergence speed is highly significant for industrial applications, along with enhanced performance. \\n\\nWe will add the discussion in the revised paper. Thanks for your question again.\"}",
"{\"title\": \"Other questions\", \"comment\": \"W2: Can authors explain is there a reason that in some cases the MRR is not statistically significant as shown in Table 1 and 2? Can authors give some insights on this observation?\\n* A2: This is an interesting question. In MRR, it actually cares about the inverse of the position of the __first relevant item__ while NDCG and MMF computes the overall ranking lists. \\nFrom the results, we observe that the first relevant item remains almost unchanged across methods; however, FairDual excels at positioning the second and subsequent relevant documents (often belonging to smaller groups) higher in the ranking.\\n\\nThis indicates that while the first relevant items typically achieve higher scores, FairDual improves the middle-ranked items in terms of both fairness and accuracy. We will add this discussion to the revised paper\\u2014thank you for raising such a thoughtful question.\", \"w3\": \"In the visualization of Figure 3(c), the differences in the patterns of the two figures are not quite obvious. Could the authors provide some quantitative results to distinguish the different patterns, such as divergence in the two distributions?\\n* A3: Thanks for the suggestion. We test the KL divergence of two different groups under UNI and our method FairDual: \\n\\n| Models| KL divergence|\\n|------------------|--------|\\n| UNI| 0.113|\\n| FairDual | 0.083|\\n\\nFrom the results, we observe that our method, FairDual, exhibits a smaller KL divergence, indicating that the embeddings learned through our approach bring the embeddings of the tail group closer to those of the head group, thereby enhancing fairness. We will incorporate these results into the revised version.\"}",
"{\"summary\": \"This paper analyze the Group max-min fairness (MMF) constrained optimization problem in recommendation. The authors first explain the Group MMF constrained objective as a non-linear polynomial form, indicating that the Jensen gap is non-negligible in mini-batch sampling based optimization (i.e., the gap between the overall loss and the sum of batched loss). To bridge the Jensen gap, this paper propose a dual optimization method called FairDual. Specifically, they rewrite the objective as a group-weighted BCE loss, and utilize the dual mirror gradient descent algorithm to optimize this loss. They further conduct experimental validation of FairDual's effectiveness and provide a detailed analysis of its proposed theoretical advantages.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The Group MMF studied by the authors is significant for recommendation fairness research.\", \"The motivation of the paper is clear, the algorithm introduction is straightforward, and the experimental analysis is detailed.\", \"The paper employs dual optimization to separate MMF and predicted score, resulting in a simple form of group-weighted BCE loss, and uses the dual mirror gradient descent algorithm for optimization, which is somewhat novel.\"], \"weaknesses\": [\"Theoretical proofs contain parts that require clarification, and the writing of the proof needs further improvement.\", \"The authors' analysis of the proposed group-weighted BCE loss is insufficient.\", \"In the implementation of FairDual algorithm, sampling-based ranking may introduce bias that is not adequately discussed.\", \"The authors conducted experiments on only two datasets, which may not be sufficient to demonstrate the algorithm's generalization.\"], \"questions\": [\"**Major Concerns:**\", \"**Questions about Theoretical Analysis and Algorithm.** I have the following questions about the theoretical analysis and algorithm in this paper that need clarification:\", \"In the proof of Theorem 1 (Appendix A), why is Problem (15), i.e., $\\\\min\\\\mathbf{1}^\\\\top\\\\hat{\\\\boldsymbol{A}}^\\\\top\\\\boldsymbol{w}+\\\\lambda\\\\max_{g\\\\in\\\\mathcal{G}}\\\\gamma_g(\\\\hat{\\\\boldsymbol{A}}^\\\\top\\\\boldsymbol{w})_g$ , equivalent to Problem (2), i.e., $\\\\min\\\\boldsymbol{b}^\\\\top(\\\\hat{\\\\boldsymbol{A}}^\\\\top\\\\boldsymbol{w})^{1+t}$ ? Specifically, considering the limit in the function $g(\\\\cdot; \\\\infty)$ , how is the constant $t$ determined? Providing an explicit solution for $t$ and $\\\\boldsymbol{b}$ is important for Theorems 1 and 2.\", \"The organization of Lemma 2, Lemma 3, and Theorem 3 (Appendix D-F) is somewhat disorganized. I suggest reorganizing these proofs, e.g.:\", \"Place Lemma 3 before Lemma 2, as the conclusion of Lemma 2 depends on Lemma 3.\", \"Rewrite the proof of Lemma 2 to explain why the conclusion can be derived from $r(\\\\boldsymbol{\\\\mu} + c\\\\boldsymbol{b}) < \\\\infty$ (i.e., Lemma 3).\", \"The group-weighted form (4) of the Group MMF-constrained objective in Theorem 3 is concise, but its weight $\\\\boldsymbol{s}_g$ is not. The authors should provide some intuitive explanations for the weight $\\\\boldsymbol{s}_g$ to better elucidate the experimental phenomena (Case study in Section 6.3). 
For instance, under what circumstances is $\\\\boldsymbol{s}_g$ larger, and when is it smaller?\", \"In the calculation of $\\\\widetilde{w}$, the authors randomly sample $Q$ items and set $\\\\widetilde{\\\\boldsymbol{w}} _ b=\\\\sum _ {k=1}^K(\\\\boldsymbol{E} ^ j\\\\boldsymbol{e} _ {u _ b}) _ {[k]}$ (cf. line 12 in Algorithm 1, and line 358). I am primarily concerned about the bias caused by sampling-based ranking (although it does not affect the fairness bound given in Theorem 4). Can the authors provide a theoretical analysis of this bias? Alternatively, could the authors change the sampling-based ranking to random sampling of $\\\\boldsymbol{E}^j\\\\boldsymbol{e} _ {u _ b}$ , and test the impact of this bias on the convergence rate of Jensen gap?\", \"**Questions about Experiments.** I have the following concerns about the experiments:\", \"There are only two datasets utilized in the main results (Tables 1 and 2), which is insufficient. The authors might consider adding one or two widely-used datasets, such as Amazon-Electronic, which can be processed using the same method as in Appendix H.\", \"In Section 6.3 \\\"Experimental Analysis\\\", the authors find that the accuracy first increases then decreases as $\\\\lambda$ increases, and attribute the phenomenon to the popularity bias. Then, is it possible to apply popularity debias method to the proposed algorithm, e.g., Inverse Propensity Score (IPS)-based reweighting method?\", \"**Minor Concerns:**\", \"Line 324, $\\\\hat c _{u, i} = -d(\\\\boldsymbol{e} _u, \\\\boldsymbol{e} _i)$, should there be $\\\\hat c _{u, i} = d(\\\\boldsymbol{e} _u, \\\\boldsymbol{e} _i)$ ?\", \"Line 325, the authors should suppose that $\\\\boldsymbol{e}_u$ and $\\\\boldsymbol{e}_i$ are normalized to make sure $d(\\\\boldsymbol{e}_u, \\\\boldsymbol{e}_i) \\\\leq 1$ , which is relied on by the proof of Theorem 4 (cf. Line 1064).\", \"Line 357, \\\"The $L$ items\\u2019 embeddings are denoted as ...\\\", the $L$ should be $Q$ ?\", \"Line 979, the minus in $-\\\\mathcal{I}$ should be placed at the loss term.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the explanation, which addresses my concerns. Due to my limited expertise in this area, I would like to keep my positive rating but with low confidence.\"}",
"{\"summary\": \"This paper observes that the MMF-based optimization introduces a Jensen gap, which will become more pronounced when mini-batch size decreases and group size increases. The authors reformulate MMF into a group-weighted optimization, and solve its dual to minimize Jensen gap. Theoretical analysis reveals that the proposed method achieves a sub-linear convergence rate. Experiments are conducted across three recommendation models on two datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper provides both theoretical analysis and empirical results for motivation derivation and the effectiveness of the proposed method.\\n2.\\tThe experiments are conducted across large-scale recommendation models, which aligns well with the stated motivation.\\n3.\\tThe paper is generally well-written, with a clear structure.\", \"weaknesses\": \"1.\\tThe assumption of convexity is too strong and impractical for large-scale recommendation models.\\n2.\\tWhy can the reported NDCG exceed 1, which is theoretically impossible? Also, please specify the number of items in the truncated list K.\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper, through theoretical and empirical analysis, argues that existing methods with group max-min fairness (MMF) optimization for fairness-aware recommender systems will introduce Jensen gaps during model convergence when applying mini-batch sampling. It theoretically reformulates the MMF constraint objective as a group-weighted optimization objective and proposes a FairDual algorithm to minimize the Jensen gap. The effectiveness of FairDual is verified on two public datasets combined with three skeleton recommendation models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"S1: Sufficient support is provided for motivation through theoretical and empirical analysis.\", \"s2\": \"A new perspective based on group-weighted optimization is provided for MMF, and some corresponding theoretical insights are provided.\", \"s3\": \"Combining different skeleton models on multiple datasets provides various experimental results.\", \"weaknesses\": [\"W1: The quality and clarity of the presentation need to be improved. Including some unclear statements that need to be revised, the organization of figures and tables needs to be revised, and some content coherence needs to be further revised. For example,\", \"For Formula 1, the meaning of the symbol $n_i$ seems missing. Secondly, based on the description in the second paragraph of Section 3, $L_k(u)$ is used to represent the recommendation list, and $K$ identifies the list length. However, in the description in the last paragraph, the statement \\\"following the practice in time-aware RS, there may be users with interactions cu,i greater than the ranking size K, in which case we will only consider the least recent K interactions to represent their recent preferences\\\" is confusing. Why is it emphasized that there are users with more than K interactions? Why must the most recent K interactions be kept instead of other numbers?\", \"Regarding the first paragraph of Section 4, the statement \\u201cIn real-world scenarios, the number of users |U| is often large, and a mini-batch sampling strategy is often necessary due to the large computational costs. This involves dividing the |U| users into |U|/B batches and performing gradient descent methods on each batch\\u201d is also confusing. In my experience, in practice, grouping users into different batches for optimization is not usually adopted, i.e., each batch only contains a subset of users, and all interactions of each user are included. Secondly, the following description seems to use a random batch sampling. The purpose of emphasizing this aspect here is unclear.\", \"Regarding the second paragraph of Section 4.2 and Figure 1, as far as I understand, the convergence of the same group should be examined under different batch sizes to obtain the desired observations and conclusions.\", \"For the last paragraph of Section 5.2.1, the meaning of *P* is missing.\", \"Based on the current description of Section 5.2.2, I can't find any instructions on handling the first batch since it lacks a pre-batch to compute $g$. Secondly, does the operation of sampling $Q$ items bring noise or instability? This requires more discussion and experimental analysis.\", \"The current placement of figures and tables does not facilitate reading and needs to be placed as close to the corresponding description as possible.\", \"I like the first half of this paper. 
However, I am confused about why fairness must be associated with large recommendation models (especially large language recommendation models) after the methods section. On the one hand, this makes some of the treatments required for large language recommendation models appear abruptly. On the other hand, it is not conducive to evaluating the effectiveness of the proposed solution for fairness in a more general setting.\"], \"w2\": [\"The proposed FairDual lacks some deeper and more valuable insights. For example,\", \"Does it have the same performance or properties for other types of loss functions?\", \"Does it have the same behavior or properties as other fairness optimization constraints?\", \"How does it compare to existing work regarding storage space, computational complexity, and parameter size? Some static or dynamic group weighting methods discussed in related work seem lightweight. Is the additional overhead worthwhile?\", \"If it is not just about fairness at the item group level, does it apply to fairness at the user group level or even in situations where both users and items exist in groups?\"], \"w3\": [\"The current experimental setup and experimental results are not convincing enough.\", \"Representative datasets adopted by many previous fairness recommendation methods should be included more.\", \"Related to the previous concerns, the current version's selection criteria for baselines are confusing and not sufficiently representative. Skeleton models should not be constrained to be recommendation models related to large language models, and more research lines of fair recommendation methods mentioned in related work should be included as baselines, especially those aimed at designing group weighting.\", \"The current description of the implementation details is oversimplified, which is not conducive to reproducibility. Secondly, $\\\\lambda$ is mentioned to range from 0 to 5, but in Figure 3 it is inclusive of 0 to 10.\"], \"questions\": \"Please see the description in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear reviewer,\\n\\nThanks a lot for your hard work and your suggestions. It really helps us improve our paper.\\n\\nThank you for your suggestion regarding re-polishing the paper and placing additional experiments. In the revised version, we will refine the entire paper, include the efficiency experiments in the main text, and move the non-LLM experiments to the Appendix. \\n\\nFor the second concern, \\n(1) Our three datasets, MIND and Amazon, are industrial-scale datasets that can also be utilized for click-through rate prediction tasks [1]. Additionally, our efficiency experiments demonstrate that our methods are suitable for industrial-scale applications.\\n\\n(2) Typically, recommendation systems involve several key tasks, such as rating prediction, click-through rate prediction, and ranking tasks [2]. In this paper, we primarily **focus on fair ranking tasks (Section 3 Formulation), which are crucial and among the most widely used applications [3]**. In the revised version, we place __greater emphasis on the ranking task settings__. Moreover, we want to emphasize that our methods can also be easily extended to other tasks, as mentioned in our previous responses, which we believe will provide significant inspiration to the research community.\", \"for_the_minor_concerns\": \"\", \"q1\": \"Is the added BPR baseline based on the original form of matrix factorization?\", \"a1\": \"Yes, the original BPR model is based on matrix factorization, we will emphasize it in the revised version.\", \"q2\": \"some recommendation models based on large language models are compatible with pairwise losses.\", \"a2\": \"Due to the substantial computational costs of LLMs, recent LLM-based RS are often trained on small subsets of data, whereas traditional models can be trained on full datasets, often achieving comparable performance to LLMs on certain datasets, as noted in [4].\\n\\nIn summary, we want to emphasize that, **in accordance with ICLR policy, the evaluation should be based on the revised version**, and we assure you that your new concerns regarding content organization and clarifications will be appropriately addressed in the camera-ready version (if accepted). __We kindly ask you to consider both our contributions for identifying and mitigating the ignoring Jensen gap for the fairness and RS communities__. We believe our paper can help other researchers explore its applicability to various applications.\\n\\nBest regards,\\n\\nAuthors\\n\\n\\n[1] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for Click-Through Rate Prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery; Data Mining (KDD '18). Association for Computing Machinery, New York, NY, USA, 1059\\u20131068.\\n[2] Ko, H., Lee, S., Park, Y., & Choi, A. (2022). A survey of recommendation systems: recommendation models, techniques, and application fields. _Electronics_, _11_(1), 141.\\n[3] Li, Y., Chen, H., Xu, S., Ge, Y., Tan, J., Liu, S., & Zhang, Y. (2022). Fairness in recommendation: A survey. _arXiv preprint arXiv:2205.13619_.\\n[4] K. Bao, J. Zhang, W. Wang, Y. Zhang, Z. Yang, Y. Luo, F. Feng, X. He, and Q. Tian. A bi-step grounding paradigm for large language models in recommendation systems. arXiv preprint arXiv:2308.08434, 2023a.\"}",
"{\"comment\": \"Thank you for your detailed response. I will keep my positive assessment.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"The proposed FairDual lacks some deeper and more valuable insights\", \"comment\": \"W2.1 Does it have the same performance or properties for other types of loss functions?\\n\\n* A2.1: Thanks for your question. \\n\\nIn recommendation, another widely used loss function is the BPR loss [1], which aims to increase the distance between positive and negative samples. Interestingly, we find that our methods can also be applied to this loss, as BPR loss is a convex function with respect to positive items, and our dual formulation remains valid. \\n\\nWe conduct the experiments in BPR model [1] backbones (Non-LLMs) on MIND dataset with the aforementioned new baselines to show our effectiveness. Note that FairNeg is only designed for the BPR loss, therefore, it is not tested on other backbones.\\n\\n|Models|NDCG@5|MRR@5|MMF@5|NDCG@10|MRR@10|MMF@10|NDCG@20|MRR@20|MMF@20|\\n|-------------------|--------|-------|--------|---------|--------|---------|---------|--------|---------|\\n|Prop|0.42|0.32|0.05|0.57|0.38|0.06|0.95|0.48|10.0|\\n|DRO|0.73|0.62|12.9|0.87|0.72|11.8|1.12|0.79|12.9|\\n|SDRO|0.67|0.61|3.88|0.84|0.68|6.87|1.04|0.73|12.03|\\n|IFairLRS|0.68|0.57|0.13|0.77|0.61|0.23|1.07|0.69|1.38|\\n|**FOCF(new)**|0.4|0.32|0.05|0.57|0.38|0.07|0.95|0.48|10.0|\\n|**Max-min sample(new)**|0.66|0.58|6.54|0.81|0.64|8.8|1.05|0.71|10.87|\\n|**Reg(new)**|0.67|0.61|3.27|0.83|0.67|5.89|1.06|0.73|11.25|\\n|**FairNeg(new)**|0.72|0.63|6.07|0.91|0.71|8.8|1.21|0.79|12.64|\\n|__FairDual(Ours)__|__0.76__|__0.64__|__11.84__|__0.94__|__0.72__|__13.87__|__1.27__|__0.81__|__14.6__|\\n\\nFrom the results, we observe that our method maintains strong performance even with BPR loss functions. We will include these experiments, alongside results from other backbone models, in the main body of the revised paper for comprehensive comparison.\\n\\nW2.2: Does it have the same behavior or properties as other fairness optimization constraints?\\n* A2.2: It is a good question. \\n\\nIn Theorem 1, we demonstrate that our optimization objective is equivalent to the power-family fairness framework, which encompasses mainstream fairness definitions such as Entropy Fairness, $\\\\alpha$-Fairness, and Theil Index [9]. Consequently, our method is __highly adaptable and can be generalized to various fairness objectives within this framework__. \\n\\n\\nWe also test the performances of other fairness metric Gini Index Compared to the baselines (Table 1) on MIND datasets. Note smaller Gini Index means more fairness.\\n|Models|GINI@5|GINI@10|GINI@20|\\n|-------------------|--------|-------|--------|\\n|Prop|0.488|0.488|0.472|\\n|DRO|0.511|0.476|0.487|\\n|SDRO|0.503|0.478|0.453|\\n|IFairLRS|0.458|0.454|0.448|\\n|**FairDual(ours)**|**0.444**|**0.450**|**0.441**|\\n\\nFrom the results, we can observe that our model can still perform good on other fairness metrics.\\n\\nWe believe our paper can help other researchers explore its applicability to various loss functions, and other fairness metrics, which is also our contributions to the communities.\\n\\n\\nW2.3: How does it compare to existing work regarding storage space, computational complexity, and parameter size? Some static or dynamic group weighting methods discussed in related work seem lightweight. Is the additional overhead worthwhile?\\n\\n* A2.3: Thanks for the question. \\n\\nFirstly, we all have parameters of the same magnitude (i.e., group size parameters (hundred level), which are in the range of hundreds and negligible compared to the backbone (million level)). 
\\nOur method only requires additional space for Q item embeddings and extra training time (Q * 1.5s). Applications can trade off Q based on available resources (as discussed in a previous response).\\n\\nSecondly, as mentioned in Table 3 of the original paper, although there is an additional time overhead per round, __our convergence speed accelerates by 30% compared to the best baseline__. This 30% improvement in convergence speed is highly significant for industrial applications, along with enhanced performance.\\n\\nW3.3\\uff1aIf it is not just about fairness at the item group level, does it apply to fairness at the user group level or even in situations where both users and items exist in groups?\\n* A3.3: Thank you for your question. \\n\\nIndeed, our method can be easily generalized to the user group level by replacing the adjacency matrix with a user-side equivalent while keeping the rest unchanged. \\n\\nFor the two-sided form, it simply requires introducing two coefficients, $\\\\lambda_1$ and $\\\\lambda_2$, and applying two independent dual gradient descent updates as described in our algorithm. We will include a detailed discussion on this in the revised version.\\n\\n\\n[9] Lan, T., & Chiang, M. (2011). An axiomatic theory of fairness in resource allocation. _George Washington University, http://www. seas. gwu. edu/tlan/papers/fairness. pdf, Tech. Rep_.\"}",
"{\"comment\": [\"I am grateful to the authors for providing a detailed response that addresses my concerns.\", \"If the evaluation is based on the revised version, my three concerns are:\", \"Re-polishing the entire paper was necessary because there were numerous revised sections. For example, the description in the last paragraph of the first section was not updated and was inconsistent with the abstract.\", \"To avoid the abruptness of the content and too many necessary experimental results being placed outside the main text, I suggest that the authors focus their perspective and writing on a specific type of recommendation scenario, such as group fairness issues around recommendations based on large language models or sequence recommendations.\", \"To verify the authors\\u2019 description of the advantages of FairDual on an industrial scale, it may be necessary to conduct experiments on some corresponding datasets, such as some datasets provided by the industry commonly used in click-through rate prediction tasks.\"], \"other_minor_issues\": \"1) Results from previous lightweight work on group fairness should be included in the efficiency experiments; 2) Is the added BPR baseline based on the original form of matrix factorization? This may be ambiguous since it is a loss function compatible with many models. 3) some recommendation models based on large language models are compatible with pairwise losses.\\n\\nIf the evaluation is based on the original submission status, the revisions that need to be included may be far beyond what is acceptable for a normal camera-ready version and numerous experimental results may lack a second review.\\n\\nConsidering the above comments, I will increase my score appropriately, but still do not think this submission is at an unquestionably acceptable level in this venue.\"}",
"{\"title\": \"The current experimental setup and experimental results are not convincing enough Part1\", \"comment\": \"W3.1: The current description of the implementation details is oversimplified, which is not conducive to reproducibility. Secondly, $\\\\lambda$ is mentioned to range from 0 to 5, but in Figure 3 it is inclusive of 0 to 10.\\n* A3.1: Sorry for the confusion. \\n\\nFor the first question, we will add the following implementation details in the Appendix of revised version to help readers to reproduce our results: \\n\\n(1) For the environment, our experiments were implemented using Python 3.9 and PyTorch 2.0.1+cu117. All experiments were conducted on a server with an NVIDIA A5000 running Ubuntu 18.04.\\n\\n(2) The pre-processing steps are detailed in Appendix H. For the missing hyper-parameters, we tune sample number $Q\\\\in [50, 400]$ (results show in the previous responses), historical length $H\\\\in [3,7]$ (results show in Table 4), freeze parameter updating gap $\\\\beta\\\\in[128, 3840]$. \\n\\n(3) To mitigate the impact of randomness, we set the temperature coefficient to 0.2 for the LLM and ran each model three times, taking the average of the results. Other LLMs settings are: the penalty for frequency is 0.0, and the penalty for presence is 0.0, the maximum generated token number to 1024.\\n\\n(4) For the Non-LLMs-RS backbones, we mainly reference the RecBole toolkit (https://github.com/RUCAIBox/RecBole). For the LLMs tuning, we reference the BigRec pipelines (https://github.com/SAI990323/BIGRec). __And we have also included our code in the supplementary materials to ensure reproducibility__.\\n\\nFor the second question, we are sorry for the typo, the $\\\\lambda$ should be tuned among [0,10]. We will revise it accordingly.\\n\\nW3.2 Representative datasets adopted by many previous fairness recommendation methods should be included more. Baselines are confusing and not sufficiently representative. \\n\\n* A3.2 Thanks for the question, as mentioned in summary, we __add a representative dataset Amazon-Electronic [4] tested on all baselines and our method__ to the effectiveness of our methods and involve __four baselines__ covered from __optimizing-based__ methods, and __group fair-aware__ recommendation methods: : Max-min sample[5], FOCF [6], Reg[7] and FairNeg [8]. 
Note FOCF, Reg is designed for the non-LLMs RS models and FairNeg is designed for the pair-wise RS models.\\n\\nFor new dataset Amazon-Electronic, we test on the most advanced model BigRec, and the following is the results:\\n| Model| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|---------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| UNI| 4.61| 4.3| 0.26| 4.93| 4.43| 0.25| 5.3 | 4.53| 0.21|\\n| DRO| 4.65| 4.34| 0.24| 4.96| 4.46| 0.24| 5.33| 4.57| 0.21|\\n| Prop | 4.63| 4.33| 0.26| 4.96| 4.47| 0.25| 5.33| 4.57| 0.21|\\n| SDRO | 4.6 | 4.29| 0.25| 4.92| 4.42| 0.24| 5.29| 4.52| 0.2|\\n| IFairLRS| 2.21| 2.06| 0.19| 2.46| 2.16| 0.17| 2.69| 2.22| 0.12|\\n| **Maxmin Sample (new)**| 4.6| 4.31| 0.27| 4.92| 4.44| 0.25| 5.31| 4.55| 0.21|\\n| **FairDual(Ours)** | __5.08__| __4.78__| __0.31__ | __5.43__| __4.92__| __0.3__ | __5.84__| __5.03__| __0.26__|\\n\\nFor other two dataset, the following is the new baseline results:\\n\\nFor MIND dataset\\n| Model| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|---------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| **Maxmin Sample (new)**| 0.98| 0.75| 2.25| 1.49| 0.96| 1.71| 2.19 | 1.15| 3.13|\\n| **FairDual(Ours)**| __1.15__| __0.88__| __2.82__| __1.69__| __1.11__| __2.99__| __2.28__| __1.27__| __3.39__|\\n\\nFor Amazon-book dataset\\n| Model| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|---------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| **Maxmin Sample (new)**| 2.49| 2.31| 6.80| 2.72| 2.43| 6.80| 2.97 | 2.74| 7.5|\\n| **FairDual(Ours)**| __3.11__| __2.88__| __8.90__| __3.31__| __2.96__| __9.00__| __3.60__| __3.04__| __8.89__|\\n\\nFrom the results, we can also see that our model can outperform all the baselines, showing our effectiveness. We will add the results in the main body of the revised paper.\\n\\n__For other baselines FOCF, Reg and FairNeg, we compare them in the following non-LLM backbone model experiments.__\"}",
"{\"title\": \"Summary of Rebuttal\", \"comment\": \"Thank you for sharing your detailed and insightful questions and suggestions, they really help us to improve our paper.\\n\\nFirstly, we will summarize our changes in the rebuttal as follows:\\n\\n(1) We __recheck and review__ our quality and clarity of the __presentation__ according to your questions.\\n\\n(2) For the deeper and more valuable insights, we conduct the following discussion and rebuttal\\n\\n* We involve more discussion about our method is highly adaptable and can be generalized to various __group fairness form__.\\n* We conduct the experiments on __other loss functions and fairness metrics__. The experiments also verify the effectiveness of our methods.\\n\\n(3) For the experimental setup and results:\\n\\n* We add three most widely used __non-LLMs-based recommendation backbones__ (BPR [1], GRU4Rec [2], SASRec [3]) on MIND dataset to show our methods are effective in a general setting.\\n\\n\\n* We also __add a representative dataset Amazon-Electronic [4] tested on all baselines and our method__ to the effectiveness of our methods. \\n\\n* We involve __four baselines__ covered from __optimizing-based__ methods, and __group fair-aware__ recommendation methods. Note that group fair-aware recommendation models cannot well adapted into LLMs-based recommender models since they are often developed under pair-wise form of recommendation [1]. \\n\\n__Optimizing-based__ baselines: \\n[5] Max-min sample: applies optimizing techniques to dynamically sample groups.\\n[6] FOCF: applies a fair-aware regularization loss of different groups into non-LLMs RS.\\n\\n __Group fair-aware__ methods in recommendation:\\n[7] Reg: Penalizes the squared diference between the average scores of two groups for all positive user-item pairs into non-LLMs RS.\\n[8] FairNeg: A negative sampling way for pair-wise recommendation into non-LLMs RS.\\n\\n* We conduct more analysis about the __sample number $Q$, storage space, complexity, and parameter size__.\\n\\n\\nSecondly, we want to emphasize that our paper primarily addresses the significant yet often overlooked bias (Jensen gap) that arises when optimizing fairness objectives in recommendation systems. We thoroughly analyze the reasons behind this bias and propose FairDual, a well-generalized and efficient algorithm to bridge the Jensen gap. Our approach is validated across three diverse datasets and six state-of-the-art backbones, compared against various types of baselines. We believe our paper can help other researchers explore its applicability to various loss functions, objectives, and fairness concepts.\\n\\n\\nIn summary, we kindly ask you to consider __both our theoretical contributions and real-industrial applications__ for the fairness communities. In the following responses, we will address your question __one by one__ to ensure clarity and thoroughness. \\n\\n[1] Steffen Rendle et al. \\\"BPR: Bayesian Personalized Ranking from Implicit Feedback.\\\" in UAI 2009.\\n[2] Yong Kiam Tan et al. \\\"Improved Recurrent Neural Networks for Session-based Recommendations.\\\" in DLRS 2016.\\n[3] Wang-Cheng Kang et al. \\\"Self-Attentive Sequential Recommendation.\\\" in ICDM 2018.\\n[4] R. He and J. McAuley. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering.\\n[5] J. D. Abernethy, P. Awasthi, M. Kleindessner, J. Morgenstern, C. Russell, and J. Zhang. Active sampling for min-max fairness in ICML 2022.\\n[6] Yao sirui et al. 
Beyond parity: Fairness objectives for collaborative filtering in Neurips 2017.\\n[7] Toshihiro Kamishima and Shotaro Akaho. 2017. Considerations on recommendation independence for a find-good-items task.\\n[8] Chen, Xiao et al. Fairly Adaptive Negative Sampling for Recommendations in WWW 2023.\"}",
"{\"metareview\": \"This paper studies the group max-min fariness (MMF) constrained optimisation probem in recommendation. The authors theoretically show the MMF-constrained objective can be reformulated as a group-weighted optimization objective. Then, they propose a dual optimization method, named FairDual,to minimise the Jensen gap. Some theoretical analysis of the proposed method have been performed, and extensive experiments have been performed on public datasets to demonstrate the effectiveness of the proposed method.\\n\\nOverall, this paper is well motivated, and the proposed method is novel. The proposed method is solid with a guaranteed bound for Jesen gap, and the experiment also showed that the proposed method indeed has lower gap than other baselines. Moreover, the authors also conduct comprehensive evaluations, including the effectiveness, Jensen gap analysis, case study, training efficiency, and parameter analysis.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, the authors modify the presentation of the paper according to the review comments, provide more experimental results on new datasets, additional baselines, and updated backbones. Moreover, they also perform analysis on the sample size, computational cost, fairness metric, and popularity bias. The authors have well addressed the reviewer's concerns regarding with the theoretical analysis, sampling-related experimental analysis, the assumption of convexity, the value of NDCG, the baselines, and the statistical significance of MRR.\"}",
"{\"title\": \"Baselines\", \"comment\": \"W1: I suggest the authors to add state-of-the-art baselines in optimization-based methods mentioned in the paper.\\n* A1: \\nWe involve __four baselines__ covered from __optimizing-based__ methods, and __group fair-aware__ recommendation methods. Note FOCF, Reg is designed for the non-LLMs RS models and FairNeg is designed for the pair-wise RS models. We conduct experiments on different LLMs and non-LLMs backbones on MIND dataset.\\n\\n[1] Max-min sample: applies optimizing techniques to dynamically sample groups.\\n[2] FOCF: applies a fair-aware regularization loss of different groups into non-LLMs RS.\\n[3] Reg: Penalizes the squared difference between the average scores of two groups for all positive user-item pairs into non-LLMs RS.\\n[4] FairNeg: A negative sampling way for pair-wise recommendation into non-LLMs RS.\\n\\n\\n\\nFor MIND dataset\\n| Model| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|---------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| **Maxmin Sample (new)**| 0.98| 0.75| 2.25| 1.49| 4.43| 0.96| 2.19 | 1.15| 3.13|\\n| **FairDual(Ours)**| __1.15__| __0.88__| __2.82__| __1.69__| __1.11__| __2.99__| __2.28__| __1.27__| __3.39__|\\n\\nFor Amazon-book dataset\\n| Model| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|---------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| **Maxmin Sample (new)**| 2.49| 2.31| 6.80| 2.72| 4.43| 6.80| 2.97 | 2.74| 7.5|\\n| **FairDual(Ours)**| __3.11__| __2.88__| __8.90__| __3.31__| __2.96__| __9.00__| __3.60__| __3.04__| __8.89__|\\n\\nBPR[5] backbones:\\n\\n|Models|NDCG@5|MRR@5|MMF@5|NDCG@10|MRR@10|MMF@10|NDCG@20|MRR@20|MMF@20|\\n|-------------------|--------|-------|--------|---------|--------|---------|---------|--------|---------|\\n|Prop|0.42|0.32|0.05|0.57|0.38|0.06|0.95|0.48|10.0|\\n|DRO|0.73|0.62|12.9|0.87|0.72|11.8|1.12|0.79|12.9|\\n|SDRO|0.67|0.61|3.88|0.84|0.68|6.87|1.04|0.73|12.03|\\n|IFairLRS|0.68|0.57|0.13|0.77|0.61|0.23|1.07|0.69|1.38|\\n|**FOCF(new)**|0.4|0.32|0.05|0.57|0.38|0.07|0.95|0.48|10.0|\\n|**Max-min sample(new)**|0.66|0.58|6.54|0.81|0.64|8.8|1.05|0.71|10.87|\\n|**Reg(new)**|0.67|0.61|3.27|0.83|0.67|5.89|1.06|0.73|11.25|\\n|**FairNeg(new)**|0.72|0.63|6.07|0.91|0.71|8.8|1.21|0.79|12.64|\\n|__FairDual(Ours)__|__0.76__|__0.64__|__11.84__|__0.94__|__0.72__|__13.87__|__1.27__|__0.81__|__14.6__|\\n\\nThe results of GRU4Rec [6]:\\n| Models| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|-------------------|--------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| UNI| 0.39| 0.36| 5.08| 0.55| 0.42| 6.44| 0.83| 0.5| 9.08|\\n| Prop| 0.42| 0.35| 7.94| 0.63| 0.44| 10.19| 0.9| 0.51| 13.1|\\n| DRO| 0.56| 0.56| 0.86| 0.76| 0.64| 5.56| 1.13| 0.71| 10.7|\\n| SDRO| 0.45| 0.36| 11.42 | 0.67| 0.44| 12.05| 0.97| 0.53| 13.15|\\n| IFairLRS| 0.45| 0.38 | 7.12 | 0.68| 0.47 | 9.21 | 1.02| 0.56 | 11.70 |\\n| **FOCF (new)**| 0.56| 0.41| 5.62| 0.79| 0.63| 7.11| 1.1| 0.7| 10.29|\\n| **Maxmin sample(new)**| 0.43| 0.33| 10.9| 0.62| 0.41| 14.27| 0.91| 0.48| 13.06|\\n| **Reg(new)**| 0.45| 0.37| 6.93| 0.67| 0.46| 8.6| 1.02| 0.55| 10.92|\\n| **FairDual (Ours)**| __0.59__| __0.47__| __12.13__ | __0.85__| __0.68__| __12.77__| __1.16__| __0.76__| __14.09__|\\n\\nFor the results of SASRec [7]:\\n| Models| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | 
MMF@20 |\\n|------------------|--------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| UNI| 0.59| 0.5| 10.43 | 0.76| 0.56| 11.91| 1.09| 0.65| 12.94|\\n| Prop| 0.54| 0.45| 11.69 | 0.8| 0.55| 12.1| 1.16| 0.57| 13.01|\\n| DRO| 0.54| 0.4| 8.07| 0.72| 0.47| 11.34| 1.11| 0.57| 12.26|\\n| SDRO| 0.49| 0.4| 10.66 | 0.74| 0.49| 11.64| 1.09| 0.59| 14.02|\\n| IFairLRS| 0.58| 0.57| __12.63__ | 0.60| 0.58| 12.35| 0.62| 0.58| 13.73|\\n| **FOCF(new)**| 0.47| 0.46| 10.52 | 0.5| 0.47| 12.73| 0.53| 0.48| 14.46|\\n| **Minmax_SGD(new)**| 0.56| 0.47| 9.05| 0.74| 0.54| 12.45| 1.09| 0.64| 14.06|\\n| **Reg(new)**| 0.47| 0.38| 9.42| 0.7| 0.47| 9.52| 1.03| 0.55| 10.91|\\n| **FairDual (Ours)**| __0.64__| __0.63__| 11.98 | __0.78__| __0.64__| __13.08__| __1.31__| __0.67__| __14.51__|\\n\\nFrom the results, we can also see that our model can outperform all the baselines except MMF@5 in SASRec, showing our effectiveness on non-LLMs RS models. \\n\\n[1] J. D. Abernethy, et al.. Active sampling for min-max fairness.\\n[2] Yao sirui et al. Beyond parity: Fairness objectives for collaborative filtering.\\n[3] Toshihiro Kamishima and Shotaro Akaho. 2017. Considerations on recommendation independence for a find-good-items task.\\n[4] Chen, Xiao, et al. Fairly Adaptive Negative Sampling for Recommendations.\\n[5] Steffen Rendle et al. \\\"BPR: Bayesian Personalized Ranking from Implicit Feedback.\\\".\\n[6] Yong Kiam Tan et al. \\\"Improved Recurrent Neural Networks for Session-based Recommendations.\\\".\\n[7] Wang-Cheng Kang et al. \\\"Self-Attentive Sequential Recommendation.\\\".\"}",
"{\"summary\": \"In this paper, the authors address the challenge of integrating a max-min fairness constraint, which introduces a Jensen gap between the model\\u2019s convergence point and the optimal point. They first demonstrate that using mini-batch sampling optimization strategies leads to a Jensen gap that increases as the mini-batch size decreases. To bridge this gap, the authors propose an algorithm that reformulates the original optimization problem through a re-weighting approach, leveraging dual-optimization techniques to update the weights of each group. They theoretically prove that their approach achieves a sublinear convergence rate and numerically demonstrate its effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and provides interesting insights.\", \"The authors well motivated the problem in Section 4 where they established the existence of Jensen gap.\", \"The theoretical results appear sound. The authors also provided extensive numerical evaluations of their method.\"], \"weaknesses\": [\"It'd be helpful if the authors can provide more interpretations for Theorem 4, the main theoretical result and, in particular, comment on their technical contributions. What is the main technical novelty in attaining this theoretical result? A proof sketch could also be helpful.\", \"Following the point above, in the numerical experiments, there appears to be some non-monotonic variation of the Jensen gap w.r.t. the batch size. I wonder if the authors can comment on why this is the case. Is this consistent with the theoretical results?\"], \"questions\": [\"Could the results extend to alternative fairness constraints beyond max-min fairness?\", \"What is the computational complexity of the proposed algorithm and how does it compare with the other baselines?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Revised pdf and summary\", \"comment\": \"Dear reviewers:\\n\\n Thanks for your hard work, your suggestions really help us to improve our paper. We revised our paper according to your suggestions (**revised parts are marked as blue**) and **re-upload our modified pdf**.\", \"we_will_summarize_our_changes_as_follows\": [\"(1) For the main body:\", \"We __recheck and review__ our quality and clarity of the __presentation__ according to your questions.\", \"We have incorporated experimental results on __new datasets, additional baselines (including different loss functions), and updated backbones__, as outlined in our rebuttal.\", \"We add more intuitive descriptions to explain our weight.\", \"(2) For the Appendix:\", \"We re-polish our proof organization to help readers to understand.\", \"We involve more discussion about how our method is highly adaptable and can be generalized to various __group fairness forms, and different fairness constraints__.\", \"We add more detailed implementation details, baseline, and backbone details to enhance reproducibility.\", \"We conduct the analysis and experiments on the effect of **sample size Q, computational costs, other fairness metric,s and popularity bias**.\", \"Finally, we want to emphasize that our paper primarily addresses the significant yet often overlooked bias (Jensen gap) that arises when optimizing fairness objectives in recommendation systems. We thoroughly analyze the reasons behind this bias and propose FairDual, a well-generalized and efficient algorithm to bridge the Jensen gap. We kindly ask you to consider __both our theoretical contributions and real-industrial applications__ for the fairness communities.\", \"If you have any questions, please be free to ask them before the deadline (Nov. 26), we will answer them as soon as possible.\", \"Best,\", \"Authors\"]}",
"{\"title\": \"Rebuttal deadline is approching\", \"comment\": \"Dear Reviewers,\\n\\nThanks for your hard work reviewing our paper and for your suggestions. As the discussion deadline will end in less than two days, however, only two reviewers responded to our rebuttal. \\n\\nWe would like to know whether our responses have adequately addressed your concerns. Please do not hesitate to reach out if you have any further questions or need clarification before the deadline. We greatly appreciate your dedicated time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"W1: The assumption of convexity is too strong and impractical for large-scale recommendation models.\\n\\n* A1: Sorry for the confusion. We do not assume that all functions are convex; rather, __we only assume that the optimization objective loss function is convex, which allows it to be transformed into its dual form__. Common recommendation objectives, including BCE loss, Entropy loss, and BPR loss, all exhibit convexity. We will clarify and re-state this assumption regarding convexity.\", \"w2\": \"Why can the reported NDCG exceed 1, which is theoretically impossible? Also, please specify the number of items in the truncated list K.\\n\\n* A2: Sorry for the confusion. \\n\\nFor the first question, in caption (line 400) of Table 1 and Table 2, we state that __all the numbers with \\u201c%\\u201d omitted__, which means 1.15 in the Table means 1.15%. Since we evaluate the performance with full ranking (after filtering, for MIND, we rank nearly 1000 items and for Amazon dataset, we rank nearly 4000 items), which will make our ndcg and mrr be this kinds of magnitude [1].\\n\\nFor the second question, The truncated $K$ number in the table denotes the top-$K$, for example, top5 means $K=5$. We will modify the table $top5$ into $K=5$. Thanks for your suggestions.\\n\\n[1] K. Bao, J. Zhang, W. Wang, Y. Zhang, Z. Yang, Y. Luo, F. Feng, X. He, and Q. Tian. A bi-step grounding paradigm for large language models in recommendation systems. arXiv preprint arXiv:2308.08434, 2023a.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks authors for their comprehensive responses. I believe after solving my questions, the paper can be better. Thus I would like to rise my score.\"}",
"{\"title\": \"Response to Author's Rebuttal\", \"comment\": [\"Thank you for your detailed response, which addressed most of my concerns:\", \"I now fully understand the derivation in A1; this proof is clear and requires no modifications. I apologize for my earlier misunderstanding. From my perspective, the theoretical part of this paper should have no major issues.\", \"I like the explanation of the shadow price in A3; this encapsulates the essence of using dual optimization to address this problem. I suggest that the authors highlight this explanation in their paper.\", \"I thank the authors for the sampling-related experimental analysis provided in A4. From the results, it is clear that sampling introduces some gap, but addressing this issue exceeds the scope of this paper.\", \"The authors' experimental results are very timely, and their findings on Amazon-Electronic align with my experience. The experiments involving IPS are also intriguing, as IPS is a standard technique for recommendation debiasing, and it is important to study the trade-off between debiasing and fairness. I thank the authors for their efforts and strongly recommend incorporating these insightful experiments into the discussion of the experimental results.\", \"My final suggestion is that the authors should not label metrics such as NDCG, MRR, and MMF with percentage signs (%)! This is a very confusing practice that has led to misunderstandings among reviewers, including myself and Reviewer `w1tq`.\", \"Considering the above comments, I have increased my score.\"]}",
"{\"title\": \"Questions about Theoretical Analysis and Algorithm\", \"comment\": \"Thank you for sharing your detailed and insightful questions and suggestions, they really help us to improve our paper.\", \"q1\": \"In the proof of Theorem 1 (Appendix A), why is Problem (15) equivalent to Problem (2). Providing an explicit solution for $t$ and $b$ is important for Theorems 1 and 2.\\n* A1: Thanks for the question. \\n\\nFollowing the same definition $g(x;k;s)=(k^{\\\\top}x^{1+s})^{\\\\frac{1}{1+s}}$, in Problem (15), we have the simplified form ${\\\\gamma}^{\\\\top}x + \\\\lambda\\\\min({\\\\gamma}^{\\\\top}x)=g(x;{\\\\gamma};0) + \\\\lambda g(x;{\\\\gamma};-\\\\infty)$. \\n\\nIn such a way, $b=\\\\gamma^{\\\\frac{1}{1+s}}$. \\n\\nHowever, the specific value of $t$ is an __implicit function and cannot be solved explicitly in closed form__. This is because according to the fact that the function $g$ is continuous with respect to $s$ over its entire domain and based on the intermediate value theorem for continuous functions, there must exist a $t$ such that the linear combination of the linear functions at the two endpoints equals.\\n\\nNonetheless, we emphasize that the __subsequent methods and proof strategies are independent of the explicit solution for t__. As long as there exists a $t\\\\neq0$, the Jansen gap exists, and as $\\\\lambda$ increases, $t$ will also increase. We will include the discussion into the Theorem statements, meanwhile, we ensure our proof conclusion remain the valid.\", \"q2\": \"The organization of Lemma 2, Lemma 3, and Theorem 3 (Appendix D-F) is somewhat disorganized. Place Lemma 3 before Lemma 2 and rewrite the proof of Lemma 2 to explain why the conclusion can be derived.\\n\\n* A2: Thanks for the suggestions. \\n\\nFor the first question, we will place Lemma 3 before Lemma 2. \\n\\nFor the second question, we will re-write the beginning of the Lemma 2 as: after we get the conclusion from Lemma 3, we will have $r(\\\\mu)<\\\\infty$. Then we will proof for any $b\\\\in M, c>0$, we have $r(\\\\mu+cb)<\\\\infty$. The rest of the parts remain the same.\", \"q3\": \"The authors should provide some intuitive explanations for the weight sg to better elucidate the experimental phenomena (Case study in Section 6.3). For instance, under what circumstances is $s_g$ larger, and when is it smaller?\\n* A3: Thanks for the suggestions. \\n\\nIntutively, $s_g=1-\\\\mu_g$ is the negative showadow prices. In Lines 298-303, the high shadow price $\\\\mu_g$ indicates that this constraint is the primary factor constraining accuracy optimization. Conversely, a low or zero shadow price suggests that the fairness constraint currently imposes little restriction on accuracy. \\n\\nWe will revise the description to clarify the meaning of $s_g$\\u200b, indicating that a high $s_g$\\u200b signifies that this constraint is the primary factor limiting fairness optimization for group $g$, whereas a low or zero $s_g$\\u200b implies that the accuracy constraint for group $g$ currently has little impact on the overall optimization. Additionally, we will revisit this concept in the case study presented in Section 6.3.\", \"q4\": \"Can the authors provide a theoretical analysis of this bias? Alternatively, could the authors change the sampling-based ranking to random sampling, and test the impact of this bias on the convergence rate of Jensen gap?\\n* A4: Intuitively, a larger Q provides a more accurate gradient estimation but also incurs higher computational costs. 
We have conducted experiments to evaluate the impact of Q and will present the results. The results were conducted under the same settings of analysis section.\\n\\n\\n\\n|Q| 50 | 100 | 200 | 300 | 400 | full (unbiased) |\\n|--|--|--|--|--|--|--|\\n| NDCG (%)| 1.08 | 1.08 | 1.15 | 1.19 | 1.19 | 1.29|\\n| MMF (%)| 1.2 | 1.28 |2.18 |2.10 |2.29 |2.31|\\n\\nFrom the results, we observe that increasing the sample value Q leads to improvements in both accuracy and fairness performance. However, in LLM-based recommender systems, a larger Q significantly increases training time (with __each item requiring an additional 1.5 seconds__) and storage space. Different applications should select appropriate Q values based on their specific accuracy, fairness requirements, and computational constraints.\\nWe will include these experimental results and discussion in the Appendix.\"}",
"{\"comment\": \"Thanks for your hard work. Your valuable suggestions really help us to improve our paper!\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Deadline is approching\", \"comment\": \"Dear reviewer REQg,\\n\\nAs the discussion deadline will end in less than three days, we would like to know whether our responses have adequately addressed your concerns. Please do not hesitate to reach out if you have any further questions or need clarification before the deadline. We greatly appreciate your dedicated time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Questions about Experiments and Minor concerns\", \"comment\": \"Q5: There are only two datasets utilized in the main results (Tables 1 and 2), which is insufficient. The authors might consider adding one or two widely-used datasets, such as Amazon-Electronic.\\n\\n* A5: Thanks for your suggestions.\\n\\nwe __add a representative dataset Amazon-Electronic tested on all baselines and our method__ to the effectiveness of our methods and involve __new baseline__ Max-min sample[1], FOCF. For new dataset Amazon-Electronic, we test on the most advanced model BigRec, and the following is the results:\\n| Model| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|---------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| UNI| 4.61| 4.3| 0.26| 4.93| 4.43| 0.25| 5.3 | 4.53| 0.21|\\n| DRO| 4.65| 4.34| 0.24| 4.96| 4.46| 0.24| 5.33| 4.57| 0.21|\\n| Prop | 4.63| 4.33| 0.26| 4.96| 4.47| 0.25| 5.33| 4.57| 0.21|\\n| SDRO | 4.6 | 4.29| 0.25| 4.92| 4.42| 0.24| 5.29| 4.52| 0.2|\\n| IFairLRS| 2.21| 2.06| 0.19| 2.46| 2.16| 0.17| 2.69| 2.22| 0.12|\\n| **Maxmin Sample (new)**| 4.6| 4.31| 0.27| 4.92| 4.44| 0.25| 5.31| 4.55| 0.21|\\n| **FairDual(Ours)** | __5.08__| __4.78__| __0.31__ | __5.43__| __4.92__| __0.3__ | __5.84__| __5.03__| __0.26__|\\n\\nFrom the results, we can see our methods can still outperform all the baselines, indicating the effectiveness of our methods.\\n\\n[1] J. D. Abernethy, P. Awasthi, M. Kleindessner, J. Morgenstern, C. Russell, and J. Zhang. Active sampling for min-max fairness in ICML 2022.\", \"q6\": \"Line 324 $c_{u,i}=\\u2212d(e_u,e_i)$, should there be $d(e_u,e_i)$?\\n* A6: Sorry for the confusion. In our definition, $d(x,y)=-x^{\\\\top}y$ is the distance of x and y (negative similarities) and in recommendation, the more small distance will mean that x and y have more similarities, therefore $c_{u,i}=\\u2212d(e_u,e_i)=e_u^{\\\\top}e_i$. We will consider to directly write $c_{u,i}=e_u^{\\\\top}e_i$ by removing the $d()$ function.\", \"q7\": \"Line 325, the authors should suppose that $e_u$ and $e_i$ are normalized.\\n* A7: Thanks for the suggestion, we will modify it accordingly.\", \"q8\": \"Line 357, the L should be Q?\\n* A8: Yes! sorry for the typo, we will modify it accordingly.\", \"q9\": \"Line 979, the minus in \\u2212I should be placed at the loss term.\\n* A9: Thanks for the suggestion, we will modify it accordingly.\\n\\n\\n[2] Wang-Cheng Kang et al. \\\"Self-Attentive Sequential Recommendation.\\\" in ICDM 2018.\"}",
"{\"title\": \"Experiment Part2\", \"comment\": \"W3.3 Recommendation models related to large language models, and more research lines of fair recommendation methods mentioned in related work should be included as baselines.\\n* A3.3: Thank you for your suggestions. You are correct that our backbone models do not necessarily need to be LLMs; we use LLMs in this case because the small batch size poses greater challenges for LLM-based recommender models in addressing the Jansen gaps. \\n\\nAs mentioned in previous responses, we add three most widely used __non-LLMs-based recommendation backbones__ (BPR [1], GRU4Rec [2], SASRec [3]) on MIND dataset to show our methods are effective in a general setting. We already report the results on BPR in previous responses.\", \"the_results_of_gru4rec\": \"| Models| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|-------------------|--------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| UNI| 0.39| 0.36| 5.08| 0.55| 0.42| 6.44| 0.83| 0.5| 9.08|\\n| Prop| 0.42| 0.35| 7.94| 0.63| 0.44| 10.19| 0.9| 0.51| 13.1|\\n| DRO| 0.56| 0.56| 0.86| 0.76| 0.64| 5.56| 1.13| 0.71| 10.7|\\n| SDRO| 0.45| 0.36| 11.42 | 0.67| 0.44| 12.05| 0.97| 0.53| 13.15|\\n| IFairLRS| 0.45| 0.38 | 7.12 | 0.68| 0.47 | 9.21 | 1.02| 0.56 | 11.70 |\\n| **FOCF (new)**| 0.56| 0.41| 5.62| 0.79| 0.63| 7.11| 1.1| 0.7| 10.29|\\n| **Maxmin sample(new)**| 0.43| 0.33| 10.9| 0.62| 0.41| 14.27| 0.91| 0.48| 13.06|\\n| **Reg(new)**| 0.45| 0.37| 6.93| 0.67| 0.46| 8.6| 1.02| 0.55| 10.92|\\n| **FairDual (Ours)**| __0.59__| __0.47__| __12.13__ | __0.85__| __0.68__| __12.77__| __1.16__| __0.76__| __14.09__|\", \"for_the_results_of_sasrec\": \"| Models| NDCG@5 | MRR@5 | MMF@5 | NDCG@10 | MRR@10 | MMF@10 | NDCG@20 | MRR@20 | MMF@20 |\\n|------------------|--------|-------|-------|---------|--------|--------|---------|--------|--------|\\n| UNI| 0.59| 0.5| 10.43 | 0.76| 0.56| 11.91| 1.09| 0.65| 12.94|\\n| Prop| 0.54| 0.45| 11.69 | 0.8| 0.55| 12.1| 1.16| 0.57| 13.01|\\n| DRO| 0.54| 0.4| 8.07| 0.72| 0.47| 11.34| 1.11| 0.57| 12.26|\\n| SDRO| 0.49| 0.4| 10.66 | 0.74| 0.49| 11.64| 1.09| 0.59| 14.02|\\n| IFairLRS| 0.58| 0.57| __12.63__ | 0.60| 0.58| 12.35| 0.62| 0.58| 13.73|\\n| **FOCF(new)**| 0.47| 0.46| 10.52 | 0.5| 0.47| 12.73| 0.53| 0.48| 14.46|\\n| **Minmax_SGD(new)**| 0.56| 0.47| 9.05| 0.74| 0.54| 12.45| 1.09| 0.64| 14.06|\\n| **Reg(new)**| 0.47| 0.38| 9.42| 0.7| 0.47| 9.52| 1.03| 0.55| 10.91|\\n| **FairDual (Ours)**| __0.64__| __0.63__| 11.98 | __0.78__| __0.64__| __13.08__| __1.31__| __0.67__| __14.51__|\\n\\n\\nFrom the results, we can also see that our model can outperform all the baselines except MMF@5 in SASRec, showing our effectiveness on non-LLMs RS models. We will add the results in the main body of the revised paper.\\n\\nThanks again for your question and suggestions.\"}"
]
} |
1P6AqR6xkF | ACID: A Comprehensive Dataset for AI-Created Image Detection | [
"Haoming Lu",
"Kai Wang",
"Bin Sun",
"Hovhannes Margaryan",
"Xingqian Xu",
"Humphrey Shi"
] | Generative models have demonstrated remarkable capabilities in generating photorealistic images under proper conditional guidance. Such advancements raise concerns about potential negative social impacts, such as the proliferation of fake news. In response, numerous methods have been developed to differentiate fake from real. Yet, their accuracy and reliability still need to be improved, especially when facing state-of-the-art generative models such as large diffusion models. Infrastructure-wise, the existing testing datasets are sub-optimal in terms of research dimensions and product utility due to their limited data volume and insufficient domain diversity.
In this work, we introduce a comprehensive new dataset, namely ACID, which consists of 13M samples sourced from over 50 different generative models versus real-world scenarios. The AI-generated images in this collection are sampled based on fine-grained text prompts and span multiple resolutions. For the real-world samples, we broadly searched public data sources and carefully filtered text-image pairs based on visual and caption quality.
Using ACID, we present ACIDNet, an effective framework for detecting AI-generated images. ACIDNet leverages texture features from a Single Simple Patch (SSP) branch and semantic features from a ResNeXt50 branch, and achieves overall cross-benchmark accuracy of $86.77\%$, significantly outperforming previous methods such as SSP and CNNSpot by over $10\%$. Both our model and dataset will be open-released to the public. | [
"Computer vision",
"Generative Model",
"AI Ethics"
] | https://openreview.net/pdf?id=1P6AqR6xkF | https://openreview.net/forum?id=1P6AqR6xkF | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"r4HZ8cR6Gn",
"qX1PuhBiVq",
"ZHjmczKMOm",
"YslLvihYAe",
"9sDKegDQpM"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1730517915301,
1731037478586,
1731442563878,
1730210560698,
1730205747672
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1941/Reviewer_gfpY"
],
[
"ICLR.cc/2025/Conference/Submission1941/Reviewer_E4LX"
],
[
"ICLR.cc/2025/Conference/Submission1941/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1941/Reviewer_EhyV"
],
[
"ICLR.cc/2025/Conference/Submission1941/Reviewer_mkpN"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces a new dataset and dual-flow detection framework aimed at addressing the challenges posed by the proliferation of AI-generated images and their potential negative social impacts, such as the spread of fake news.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1: ACID Dataset: The authors present a comprehensive dataset named ACID, which contains 13 million samples sourced from over 50 different generative models and real-world scenarios. The AI-generated images in ACID are created using fine-grained text prompts, and the real-world samples are carefully selected from public data sources based on visual and caption quality, ensuring a broad representation of different image types.\", \"s2\": \"Extensive testing on various AI detectors demonstrates the challenging nature of the ACID dataset. ACIDNet, in particular, shows impressive accuracy of 98.01% on the ACID benchmark, indicating a substantial advancement in the detection of AI-created images.\", \"weaknesses\": \"W1: The dataset construction requires generating thousands of images for each model, which poses scalability challenges, especially for proprietary models that may not allow such extensive access.\", \"w2\": \"The framework proposed in this paper is simply a combination, lacking innovation. For example, it combines the addition of filters in SSP with the traditional backbone + classifier approach.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper is relatively well-motivated as AI-generated image detection is a crucial issue. I also find the evaluations thorough.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.The target issues of the paper are meaningful and worth exploring.\\n2.The motivation is clear. \\n3.The paper is easy to follow.\", \"weaknesses\": \"1.The number of images is small. Only 57693 real images and 42307 fake images. This number of images is smaller than GenImage.\\n\\n2.GAN-based methods are not included in this benchmark.\\n\\n3.Do the detectors trained on ACID benchmark perform well on real datasets? For example, the images collected from fake news on the Internet.\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper proposes a large-scale new dataset called ACID, which consists of 13M samples over 50 different recent generative models and real-world scenarios. The dataset is collected from very recent generative models, such as Stable Diffusion XL, with high resolutions, object categories, and augmentation. Furthermore, the authors propose a baseline for their method termed ACIDNet, which consists of two branches: one semantic branch with ResNetXt50, and a texture branch with high-pass filters for a single simple patch. The experiments on their proposed dataset support their method' effectiveness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The dataset collects images generated from very recent generative models, which should contribute to the related community.\", \"The authors consider several different scenarios, such as the art, unnatural forgery, and post-processing photos, which are very interesting and should be discussed in this field.\", \"The dataset considers many different settings, such as style, and object categories, which is also a issue unaddressed by former datasets.\", \"The proposed detector baseline is effective for detecting AI-generated images, supported by their experiments.\", \"The paper is well-written and easy to follow.\"], \"weaknesses\": [\"How will the proposed dataset be effective or contribute to future research? Since the generative models always evolving, there will be countless new models in the future. The ACID dataset is novel enough for now, but how to make sure for the future? I acknowledge the authors should have spent enough time and effort on collecting the dataset, but it is not enough if it is just a work depending on time. Maybe there are more insights this dataset can give for related future work.\", \"The dataset considers many different scenarios and settings, which is good. Therefore, it is a little confusing to follow all the different settings, category them may be better for reviewers to understand, such as for generalization, for robustness, etc.\", \"For the proposed detector baseline: the resnet branch is a widely-used baseline for image classification, and the texture branch is based on the SSP and Patchcraft, which underestimate the authors own contributions.\"], \"questions\": [\"The authors claim 13M samples for their ACID dataset, but in line 131, they claim 22M images. I don't know whether it is typo.\", \"The authors regard images uploaded on online platform A before 2019 as not AI-created in line 215. But why? How can you make sure there is no generated/manipulated images before 2019?\", \"For the post-processing augmentation, did the authors only employ them for training their ACIDNet? Or they also used them to organize their dataset?\", \"For the simplest patch method, it is a little strange the most discriminative part of an image is the simplest part, since intuitively the more difficult part should also be more difficult to generate. Can the authors provide any proof for this claim beyond two cited previous work?\", \"For comparisons in Tab.4, the authors compare on their proposed benchmark and show the superiority. Did the authors try to evaluate on other previous public benchmarks? This should provide more evidence for the performance.\", \"For Tab.4, did the authors evaluate other detectors by using their pre-trained checkpoints? Or fine-tuning on the proposed datasets? 
We should make the comparisons as fair as possible.\"], \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": [\"This dataset contains different data sources, the authors should make sure everything is ok for, such as privacy, terms of use.\", \"It could be better to add some ethical discussion on how the dataset and method could impact the community.\"], \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces the ACID dataset, comprising 13 million samples collected from over 50 different generative models and real-world sources, offering a broad range of resolutions. Alongside the dataset, the authors propose ACIDNet, a detection model that combines texture and semantic features. ACIDNet achieves 98.01% accuracy on their dataset, surpassing existing methods (e.g., SSP) by over 10%.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper constructs a large-scale dataset that includes images generated by a variety of generative models, enhancing the dataset's practicality and broad applicability.\", \"weaknesses\": [\"The paper exhibits some deficiencies in its writing logic. The transitions between paragraphs are not sufficiently cohesive, and the internal coherence within some paragraphs is lacking.\", \"When describing the dataset, there is a lack of detailed statistical information about the data distribution, such as the number of generated images from different categories or various generative models.\", \"The paper lacks comparative analysis with other existing datasets in terms of dataset construction; specifically, it could refer to the relevant practices in the GenImage paper.\"], \"questions\": [\"In Table 3, could you provide the parameter settings or random parameter ranges for the following augmentation methods: JPEG Compression, Add Shape, Sharpness Adjustment, Rotation, Color Jitter, Gaussian Blur, and Add Noise?\", \"In Appendix 9 of the AEROBLADE paper, it is revealed that the image storage format in the dataset can lead models to learn compression biases, significantly affecting model performance. What is the image format of your dataset? Did you use a unified image storage format?\", \"In Table 4, the top 7 rows use pretrained models to evaluate the generalization of different models on ACID through inference, while the bottom 3 rows use different methods to train and validate on ACID. Placing these two approaches in the same table can be confusing; I recommend separating them into two tables.\", \"Currently, generated image detection models are not limited to texture and semantic methods. CNNSpot and SSP are not the best-performing detection models. You might consider adding some baselines (e.g., ResNet50, ViT) and some new detection models: DRCT, AEROBLADE, NPR, RIGID, ZED, Fake-Inversion (the first three are open-source, and the others will be open-sourced).\", \"In line 127, you state that \\\"ACIDNet consistently achieves an average accuracy of 81.1%.\\\" How was the 81.1% figure obtained? I only found possibly related data of 86.77% in Table 5.\", \"In Table 5, what is the difference between \\\"Texture branch only\\\" and \\\"SSP (ACID)\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1OyE9IK0kx | On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models | [
"Sree Harsha Tanneru",
"Dan Ley",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] | As Large Language Models (LLMs) are being increasingly employed in critical domains such as healthcare, it is essential to make these models trustworthy. In this pursuit, Chain-of-Thought (CoT) prompting has emerged as a potential source of transparency in LLMs. While CoT reasoning is appealing to humans, prior studies have shown that these reasoning chains are not faithful i.e.; they do not accurately reflect the underlying LLM's behavior. Ensuring the faithfulness of LLM-generated CoT reasoning is crucial for decision-makers, who rely on them to determine if, when, and to what extent, trust the recommendations made by these models. While several works proposed strategies to enhance accuracy and truthfulness in LLMs, there has been a lack of exploration on the effectiveness of these common strategies to enhance the faithfulness of chain-of-thought (CoT) reasoning. Specifically, we explore the promise of in-context learning, fine-tuning, and activation editing to improve the faithfulness of the CoT reasoning. Our empirical analyses on benchmark tasks indicate that these strategies offer limited success in improving the faithfulness of the CoT reasoning, with only slight performance enhancements in controlled scenarios. Activation editing demonstrated minimal success, while fine-tuning and in-context learning achieved marginal improvements that failed to generalize across reasoning and truthful question-answering benchmarks. We subsequently analyse what makes faithful CoT reasoning challenging, and present findings to lay the groundwork for future research in trustworthy reasoning from LLMs. In summary, our work underscores the inherent difficulty in eliciting faithful CoT reasoning from LLMs, suggesting that the current array of approaches may not be sufficient to address this challenge. | [
"Trustworthy Machine Learning",
"Explainability",
"Interpretability",
"Faithfulness",
"Large Language Models"
] | Reject | https://openreview.net/pdf?id=1OyE9IK0kx | https://openreview.net/forum?id=1OyE9IK0kx | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vsjR7cj9vg",
"t45SaP00qu",
"qGA4R3yxeG",
"pRqB7bSx02",
"nsBZ3ZS41w",
"lYp7LPQJT8",
"lFRUU6FyFo",
"jb4EC5Xbk7",
"i321awKzXT",
"f1P6TJKQp9",
"eRN7R9PpaO",
"Y4170TVJVB",
"WSjvPjQsBP",
"VBwVlQFnnS",
"Ur87kemuKc",
"UWiJC2MdaF",
"PMcJngYuZa",
"NtcvanR3Z6",
"NLlBoedP6i",
"MOSWGClj5o",
"Eo8CpDqqhX",
"DxhbrfU8PI",
"5uBZ0pvJvC",
"4MeKqiIxnw",
"2SIXg90Jo3",
"1ihgIuhmLB",
"0OTgAT1AGm"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732519094095,
1729952049770,
1732562372303,
1730716730628,
1732547860054,
1730689877582,
1737524192761,
1733068284216,
1732518577270,
1733077445626,
1732684087176,
1733080721512,
1730694498769,
1732731192420,
1732575544489,
1732743883319,
1732520357503,
1732519712418,
1734771960776,
1732675430014,
1733023657470,
1732519421753,
1730030274532,
1732562470055,
1730620792615,
1732738268981,
1732519406768
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_De5a"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_hqPm"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_gNfW"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_gNfW"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_hqPm"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_De5a"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_bkgd"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_fxK8"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Area_Chair_F1oD"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_qPR7"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_hqPm"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_fxK8"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Reviewer_qPR7"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12455/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer bkgd\", \"comment\": \"Thank you for your thoughtful review of our paper and for recognizing the scope of our comprehensive experimental evaluations, aimed at uncovering the reasons behind limitations in existing intervention strategies for faithfulness. We would like to address the weaknesses you pointed out and provide clarification on the questions raised.\\n\\n**Q1: What is the formal definition of faithful (CoT) reasoning in LLMs?**\\n\\nFaithfulness represents how well a model\\u2019s expressed reasoning matches its internal reasoning. What constitutes internal reasoning for LLMs is a vastly complex topic, and as such can\\u2019t be reduced to a simple formal definition. It varies greatly across different faithfulness metrics (see lines 141-148). We will emphasize this aspect in our text to make it clearer to readers. Thank you pointing this out.\\n\\n**W1: The proposed solution is simply combining applied techniques in LLMs, which render the contribution incremental and straightforward**\\n\\nWe appreciate the reviewer\\u2019s viewpoint that we are utilizing existing techniques. The main purpose of our investigation is indeed to assess the abilities of *existing* intervention strategies, specifically on faithfulness. Ensuring the faithfulness of LLM-generated CoT reasoning is crucial for decision-makers, such as doctors, who rely on them to determine if, when, and how much to trust the recommendations made by these LLMs. It is therefore important to understand when and why these methods fail. Previous seminal works [1-4] have also explored whether common strategies are effective at solving certain problems across domains. **The novelty and significance of these kinds of papers do not stem from a new approach or method** but rather the significant insight that none of the popular approaches work for this problem, and we need a new set of approaches that can address this problem.\\n\\nThe novelty in our contribution is the following. We illustrate that these methods achieve limited success, and we also provide significant evidence as to why these methods are (fundamentally) limited in the context of faithfulness. **We kindly refer the reviewer to our global response for details**. We in turn systematically evaluate across several novel example selection strategies for ICL/FT, finding that none are able to consistently improve faithfulness across domains. While conventional finetuning appears promising, faithfulness is a unique type of property that assesses a model\\u2019s internals and cannot be learnt as a supervised task.\\n\\n**W2: The aforementioned techniques have shown several limitations, in past works**\\n\\nThank you for the feedback. We would like to clarify that fine-tuning and in-context learning are the widely used techniques to adapt pre-trained LLMs to specialized downstream tasks [5-10]. While we agree that there are existing limitations of the aforementioned techniques, it would be great if the reviewer could clarify which limitations would pose known challenges to faithful CoT reasoning.\\n\\n**W3: Several notions and techniques that this work builds upon, are not formally defined or described earlier in the paper, making it less accessible to a broader audience**\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the formal definition of CoT. We are actively resolving this in the updated text. 
If the reviewer could kindly provide specific examples of ambiguity we would be more than happy to address them and use them to strengthen our paper\\u2019s readability. Thank you for helping us improve our writing.\\n\\nWe are actively incorporating the above explanations to enhance the quality of our paper. We believe that we have addressed all the concerns. If there is any aspect that you feel has not been fully resolved, we would be happy to provide further information. If you are satisfied with our response, we would truly appreciate your consideration in raising your evaluation score.\\n\\n**References**\\n\\n[1] Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR, 2015.\\n\\n[2] Bolukbasi et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. NeurIPS, 2016.\\n\\n[3] Adebayo et al. Sanity Checks for Saliency Maps. NeurIPS, 2018.\\n\\n[4] Jain et al. Attention is Not Explanation. ACL, 2019.\\n\\n[5] Dettmers et al. Qlora: Efficient finetuning of quantized llms. 2023.\\n\\n[6] Hu et al. LoRA: Low-rank adaptation of large language models. In ICLR, 2022.\\n\\n[7] Jeong et al. Domain-specialized llm: Financial fine-tuning and utilization method using mistral 7b. In Journal of Intelligence and Information Systems, 2024.\\n\\n[8] Kumar et al. Fine-tuning, quantization, and llms: Navigating unintended outcomes. arXiv, 2024.\\n\\n[9] Rafailov et al. Direct preference optimization: Your language model is secretly a reward model. In NeurIPS, 2023.\\n\\n[10] Singh et al. Whispered tuning: Data privacy preservation in fine-tuning llms through differential privacy. Journal of Software Engineering and Applications, 2024.\"}",
"{\"summary\": \"The paper examines the difficulty of making large language models produce reasoning that accurately reflects their internal processes. It tests methods like in-context learning, fine-tuning, and activation editing and finds they only marginally improve a model's ability to produce faithful reasoning. The study concludes that current techniques are insufficient to ensure reasoning transparency in language models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper tackles the issue of enhancing the faithfulness of reasoning in large language models, which is vital for applications requiring high reliability.\\n\\n2. The study is methodologically sound, with rigorous experiments across different models and datasets, demonstrating the limited effectiveness of current strategies in improving reasoning faithfulness.\\n\\n3. The findings are impactful, highlighting the need for new methodologies to make LLMs more transparent and trustworthy, which is crucial for their adoption in high-stakes domains.\", \"weaknesses\": \"1. The study focuses on a limited number of benchmarks. It would benefit from expanding the range of datasets to better understand how these findings generalize across different types of reasoning tasks and domains.\\n\\n2. The paper could benefit from a more robust theoretical framework that explains why certain strategies might improve faithfulness while others do not.\", \"questions\": \"Please refer to the Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer fxK8 (1/2)\", \"comment\": \"Thank you for your thoughtful review of our paper. We are glad to hear that you believe our paper provides a strong basis for future exploration. We would like to address the weakness you pointed out and provide clarification.\\n\\n**W1/W4: Application of existing techniques and concrete next steps**\\n\\nWe appreciate the reviewer\\u2019s viewpoint that we are utilizing existing techniques. The main purpose of our investigation is indeed to assess the abilities of *existing* intervention strategies, specifically on faithfulness. Ensuring the faithfulness of LLM-generated CoT reasoning is crucial for decision-makers, such as doctors, who rely on them to determine if, when, and how much to trust the recommendations made by these LLMs. It is therefore important to understand when and why these methods fail. The key contribution of our work is to explore if popular approaches that found success in modifying LLM outputs to improve properties like accuracy [1] and truthfulness [2] can improve the faithfulness of CoT reasoning generated by LLMs. Previous seminal works [3-6] have also explored whether common strategies are effective at solving certain problems across domains. **The novelty and significance of these kinds of papers do not stem from a new approach or method** but rather the significant insight that none of the popular approaches work for this problem, and we need a new set of approaches.\\n\\nWe are happy to release our coding framework (supplemental material) upon publication, which includes extensive pipelines that lay the groundwork for assessing faithfulness, using both OpenAI APIs and Hugging Face GPU implementations. Please note that this is a rapidly evolving field with many metrics / understandings of faithfulness and our work provides a strong datapoint upon which to build, and performing evaluations so extensively is not cheap. As a result, we uncover in-depth insights concerning the limitations of ICL (word mimicking, incentivization of late change in reasoning), and the fundamental uniqueness of trying to finetune for faithfulness (please see W3 below and our **global response**).\\n\\n**W2: On the particular faithfulness metric used**\\n\\nMeasuring faithfulness of reasoning without having access to a black box is not straightforward, and hence, several works propose tests to evaluate faithfulness. Note that each test only evaluates an explanation of a particular property. We use the early answering test proposed by Lanham et. al. (2023) to measure faithfulness. The premise is that if reasoning is not post-hoc, there are fewer ways for it to be unfaithful than there are for reasoning which is post-hoc. While there are other possible faithfulness measures, they have their limitations as shown below (Table 1, Appendix).\\n\\n| Strategy | Description | Limitations |\\n| ----------- | --------------- | --------------- |\\n| Counterfactuals | If features referenced by an explanation are removed, then the model's prediction should change. | More relevant for feature importance explanations than CoT. |\\n| Adding Mistakes | If inserting a mistake into the CoT changes the model's final answer, then the model is likely not ignoring the CoT. | Dependent on external factors of generating mistakes, which influences faithfulness values. Difficult to ablate across mistakes. 
|\\n| Paraphrasing\\t | If information encoded in phrasing choices of the reasoning are responsible for the change in the answer, rather than the content of the CoT itself, then the CoT is unfaithful. | Dependent on external factors to paraphrase steps, which influences faithfulness values. Difficult to ablate across paraphrases. |\\n\\nUnlike the 'Adding Mistakes', and 'Paraphrasing' strategies, the 'Early Answering' strategy uses the generated CoT only from the model to measure faithfulness, thereby avoiding reliance on an external model/mechanism to evaluate faithfulness. However, we also looked at the 'Adding Mistakes' and 'Paraphrasing' strategies to measure faithfulness and found that both these measures are highly correlated with faithfulness from the 'Early Answering' strategy, shown in Fig 11 (Appendix). Our observation is also consistent with the finding reported in Lanham et. al. (2023).\\n\\nThank you for your comments. We will move Table 1 (above) and Fig 11 from the appendix to the main text, to prioritize the reviewer\\u2019s concerns (they remain in the appendix for now for identification purposes).\"}",
"{\"summary\": \"In recent years, there have been concerted effort in making language models more faithful and robust with methods such as finetuning, in-context learning and activation editing. This work investigates whether these 3 methods can make CoT reasoning more faithful. Their findings suggest that all of them achieve very limited performance improvements, with activation editing achieving only minimal improvements. Finetuning and in-context learning can be slightly more effective, though they seem to fail to generalize across tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, I thought the paper was strong and has potential for broad impact, because it connects so many concepts that are disparately considered. There has been a significant gap in evaluating *for* interventions, and this work systematically investigates the common and practical techniques for interventions.\", \"S1. Comprehensive evaluation of intervention methods for a widely used technique, CoT reasoning. Since this is how many researchers as well as practitioners interact with LLMs, this work is widely applicable and can have broad impact in considerations for AI safety.\", \"S2. I thought the introduction was particularly well motivated and the paper was generally well written.\", \"S3. Finetuning strategies were tested with multiple sampling strategy of their design. Adding faithfulness metric to the finetuning dataset creation was a particularly convincing experimental strategy.\", \"S4. Also introduces novel strategy for activation editing based on aligning on faithfulness vectors\", \"S5. The paper includes salient results, with most of these methods getting partial success. ICL or activation editing seem to get either accuracy or faithfulness performance enhancements, but rarely both. It seems that more finetuning on faithful datasets can improve both more so than ICL and activation editing\"], \"weaknesses\": [\"W1. It seems that activation editing was only experimented with LLaMA-3B. I wonder if this could have been an issue with this particular model, particularly because activation editing could have vastly different results depending on the architecture. For that reason, I think this result could be made more robust by adding other models for comparison such as Gemma or OLMo.\", \"W2. \\\"Fine-tuning using most faithful explanations achieve better accuracy-faithfulness trade-offs.\\\" This seems like an expected result, but I wonder if this holds true across domain. If there could have been a more comprehensive strategy such as sampling by length for comparison, I wonder if there were any observable differences across domain.\", \"W3. There's slew of methods proposed by lanham et al, but I think this paper only discusses faithfulness with respect to early answering strategy. Faithfulness metric could result in different behavior based on the metric definitions: early answering vs. adding mistakes vs. paraphrase.\", \"W4. The faithfulness based activation editing strategy was introduced, but the results on it were not included in the paper.\"], \"questions\": [\"Q1. Do you expect activation steering to be more or less effective for other models/architectures?\", \"Q2. Will you be releasing code/data for how faithfulness was calculated in this particular case?\", \"Q3. Do you expect your results to be consistent across how faithfulness metric was defined? So, for example, experimenting with faithfulness metric with paraphrasing vs. 
early answering strategy?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you authors for the point-to-point and global comment. I have no outstanding concerns left and have raised my points accordingly.\"}",
"{\"summary\": \"The paper systematically examines the Chain-of-Thought (CoT) behavior in large language models (LLMs) through experiments involving in-context learning, fine-tuning, and activation editing. The results indicate that activation editing had limited success, while in-context learning and fine-tuning led to only slight, non-generalizable improvements. The authors argue that the training process of these models does not prioritize faithfulness, which contributes to the generation of more self-aware content.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Detailed Experiment** The paper conducted thorough experiments across in-context learning, fine-tuning, and activation editing.\\n\\n2. **Insights from the Experiment** The empirical experiments provided meaningful insights, which might inform a better LLM alignment methodology for achieving faithful intermediate states in the future.\", \"weaknesses\": \"**Novelty in methodology** is the weakness of this position paper. As author(s) stated, changes are made for referenced methods/procedures, some theoretical/numerical supports can better validate the proposal. Please let me know if the following points are biased.\", \"here_are_some_directions_to_consider\": \"1. To measure faithfulness, the Area Over the Curve (AOC) metric from [1] is adopted while the paper proposed to use probability scores for each instance instead of on the dataset level. However, section 2.3.1 of [1] also stated \\\"AOC values are calculated as a weighted sum\\\", thus [1] should also work on the instance level. I suggest editing line 166 to prevent confusion if this is the case.\\n2. For activation editing, this work selected top-K heads based on faithful probing results instead of top-K truth-relatedness heads in [2], they sound serving similar purposes to me. Can we compare these methods or see if they are transferable?\\n\\nReference\\n[1] Lanham, T., Chen, A., Radhakrishnan, A., Steiner, B., Denison, C., Hernandez, D., ... & Perez, E. (2023). Measuring faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702.\\n[2] Li, K., Patel, O., Vi\\u00e9gas, F., Pfister, H., & Wattenberg, M. (2024). Inference-time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems, 36.\", \"questions\": \"1. What is the sample size of the benchmark? Correct me if I am wrong but lines 339 - 348 describe original datasets' statistics instead.\\n2. When selecting N ICL demonstrations, are we considering questions' similarities or just using faithfulness as the single index?\", \"minor\": \"1. Figures' notation requires browsing around.\\n2. Please avoid directly using acronyms, a full expression would be more reader-friendly. e.g. out of distribution for OoD in line 303 \\n3. Please check typos in the manuscript, such as:\\na. line 312, Figure 4?\\nb. line 354 asking the question *without* invoking?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you for responding to our rebuttal! We are excited to hear that we were able to address your concerns. While we note the point of limited faithfulness definition by the reviewer, we would like to clarify that this is one of the key conclusions of our work, i.e., the faithfulness metrics derived from the current faithfulness definition in explainability literature do not result in improving the faithfulness of the reasoning generated by LLMs. Hence, we conclude that the community needs to propose better metrics and definitions of faithfulness.\\n\\nIn summary, we would like to highlight again that the point raised by the reviewer is essentially the conclusion of our work. We will include this clarification in our paper's experiment and conclusion section. Please let us know if we could clarify any remaining concerns, and we would greatly appreciate it if you consider increasing your rating of our paper.\"}",
"{\"title\": \"Global Response\", \"comment\": \"Here, we demonstrate *why faithfulness is a fundamentally difficult property to optimize for* and *why existing intervention techniques provide limited success*.\\n\\n## Why finetuning for model faithfulness is difficult\\n\\nFinetuning involves gradient descent on some original model $\\\\theta_0$ and loss, $L = \\\\frac{1}{n}\\\\sum_i \\\\ell (x_i, y_i)$ to yield $\\\\theta_1 = \\\\arg\\\\min_\\\\theta L$. Most model properties operate on fixed ground truths (i.e., static $x_i, y_i$ pairs or completions).\\n\\n**Finetuning for faithfulness shifts model internals**: Faithful explanations are by design defined w.r.t. a particular model. The faithfulness of an explanation is not fixed across models, unlike other properties such as accuracy, robustness. To demonstrate, we finetune Llama-3-8B-Instruct, $\\\\theta_0$, on faithful demonstrations w.r.t. $\\\\theta_0$, to yield $\\\\theta_1$. We then use explanations from $\\\\theta_1$ to measure faithfulness w.r.t. both $\\\\theta_1$ and $\\\\theta_0$.\\n\\n| Dataset/Model | DU | DU (c) | DF | DF (c) | SU | SU (c) | SF | SF (c) |\\n|-|-|-|-|-|-|-|-|-|\\n| AQuA - Faith. wrt. original $\\\\theta_0$ | **0.623** | **0.615** | **0.636** | **0.600** | **0.653** | **0.640** | **0.620** | **0.625** |\\n| AQuA - Faith. wrt. finetuned $\\\\theta_1$ | 0.529 | 0.488 | 0.608 | 0.588 | 0.584 | 0.527 | 0.555 | 0.617 |\\n| |\\n| LogiQA - Faith. wrt. original $\\\\theta_0$ | **0.405** | **0.383** | **0.409** | **0.406** | 0.427 | 0.435 | **0.443** | **0.411** |\\n| LogiQA - Faith. wrt. finetuned $\\\\theta_1$ | 0.372 | 0.339 | 0.363 | 0.383 | **0.445** | **0.453** | 0.415 | 0.362 |\\n| |\\n| TruthfulQA - Faith. wrt. original $\\\\theta_0$ | **0.239** | **0.222** | **0.252** | **0.224** | **0.263** | **0.264** | 0.253 | 0.247 |\\n| TruthfulQA - Faith. wrt. finetuned $\\\\theta_1$ | 0.184 | 0.187 | 0.239 | 0.219 | 0.225 | 0.242 | **0.265** | **0.248** |\\n\\nOur results show that faithfulness w.r.t. $\\\\theta_1$ is notably lower than faithfulness w.r.t. $\\\\theta_0$. This is intuitive since faithful demonstrations are provided from $\\\\theta_0$. What this does strongly suggest is that (question, explanation) pairs that are faithful w.r.t. $\\\\theta_0$, may no longer be guaranteed to be faithful w.r.t. a finetuned model $\\\\theta_1$ (in our findings, they are decidedly more faithful w.r.t the original model). This ultimately supports our claim that **faithfulness is fundamentally difficult to target using finetuning.**\\n\\nN.B. This is implied in Fig 13 of the Appendix, where finetuning shifts the model's early-answer probabilities despite yielding identical/semantically similar reasoning for each CoT step.\\n\\n## Limitations of in-context learning (ICL)\\n\\n1. **ICL mimics explicit words in its examples, instead of learning implicit faithfulness properties:** We measure Jensen-Shannon Divergence between the in-context examples and the CoT response from a) zero-shot (ZS) and b) ICL. Lower is more similar (bolded). 
In almost all cases, word distributions from ICL CoT are more similar to the in-context examples than ZS CoT words.\\n\\nDataset | DU | DU (c) | DF | DF (c) | SU | SU (c) | SF | SF (c) |\\n|-|-|-|-|-|-|-|-|-|\\nAQuA (Examples vs ZS) | **0.4658** | **0.4589** | **0.4639** | **0.4779** | **0.4519** | **0.4517** | **0.4620** | **0.4769** |\\nAQuA (Examples vs ICL) | 0.4664 | 0.4664 | 0.4690 | 0.4873 | 0.4617 | 0.4602 | 0.4675 | 0.4870 |\\n| |\\nLogiQA (Examples vs ZS) | 0.5339 | **0.5224** | **0.5326** | **0.5576** | **0.5282** | **0.5302** | **0.5263** | **0.5447** |\\nLogiQA (Examples vs ICL) | **0.5288** | 0.5288 | 0.5386 | 0.5665 | 0.5364 | 0.5353 | 0.5343 | 0.5555 |\\n| |\\nTruthfulQA (Examples vs ZS) | **0.5162** | **0.5138** | **0.5199** | **0.5190** | **0.5095** | **0.5185** | **0.5116** | **0.5119** |\\nTruthfulQA (Examples vs ICL) | 0.5208 | 0.5208 | 0.5259 | 0.5239 | 0.5221 | 0.5228 | 0.5237 | 0.5180 |\\n\\n2. **Optimizing for faithfulness can have tradeoffs with accuracy.** While ICL improved faithfulness in some approaches, there is often a trade off in accuracy as shown in Figs 5, 7, 15. This is due to the metric incentivizing changes in label predictions deep into reasoning (Fig 13). Observe the original model's CoT on the number of human finger bones that leads to the correct answer. In this case, ICL has induced reasoning that sways the model's idea of the final answer throughout. In particular, our observations tend to reveal that a late change in reasoning from the model is a typical aspect of faithful CoT that can ultimately get optimized. Fig 3 demonstrates how faithfulness can be at odds with accuracy as a result of this.\\n\\n### Remarks\\n\\nWe thank the reviewers for requesting more information on why faithfulness is fundamentally difficult to achieve. We provide further results in the tables above, from carefully designed experiments, to support the case that existing techniques require rethinking specifically for the case of faithfulness. We appreciate your patience as we actively incorporate the above results and transfer the appendix analyses to the main text.\"}",
"{\"comment\": \"I'd like to point out that, while the inconsistency in hardness is the conclusion of your work, only considering one metric of faithfulness is perhaps premature to arrive at that conclusion. Your rebuttal pointed this out as well,\\n> However, we do note that identifying fundamental challenges in applying interventions to one faithfulness measure (early answering) sheds much light on how optimizing for faithfulness can be severely limited. \\n\\nI think it's less convincing if the definition of faithfulness is a moving target/not fully explored.\"}",
"{\"comment\": \"Thanks for the response.\"}",
"{\"comment\": \"Thank you for the discussion, Reviewer hqPm. We appreciate you taking the time to discuss our rebuttal.\"}",
"{\"summary\": \"Recent advances of foundation models, in particular Large Language Models (LLMs) have demonstrated impressive performances in many natural language processing tasks. Nevertheless the capabilities of LLMs at reasoning tasks are still limited and raises significant debate (see [1]). A line of recent works proposed prompt-based techniques to improve LLM capability including, but not limited to, reasoning. Notably, the most popular techniques are: *chain-of-thought* (CoT) by adding the phrase 'think/solve step by step' at the end of the prompt, and *in-context learning* by including illustrative examples in the prompt to inspire or assist the LLM about the specific context of the query to solve; another line focuses on fine-tuning the LLM on formal reasoning benchmarks data, mathematical problems (Algebra, Geometry, calculus and so on).\\n\\nThis work combines the three aforementioned techniques to improve LLMs in producing what is referred to as *faithful* CoT reasoning and rational explanations to the delivered output. Moreover, it define a metric to assess the concept of faithful CoT reasoning. \\n\\n\\n\\n[1] Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar: GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models. EMNLP 2024\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper targets an important and timely challenge in LLM. While massive effort is dedicated towards enhancing LLM's capability to reason or even demonstrating that it can reason, it still represents a major bottleneck and prevents using LLM to create AGI.\\n\\nOverall, the paper reads well and is well organised. The overall contribution is more technical and focuses on empirical studies of various combined techniques implemented, resulting in comprehensive experimental evaluations.\", \"weaknesses\": [\"The paper constitutes incremental research work. The proposed solution is simply combining applied techniques in LLMs, which render the contribution incremental and straightforward. Technically, the contribution lacks in rigor, and many of the applied strategies are not formally justified.\", \"The aforementioned techniques have shown several limitations, in past works, and more importantly in many cases techniques like activation patching are deteriorating the LLMs accuracy.\", \"Several notions and techniques that this work builds upon, are not formally defined or described earlier in the paper, making it less accessible to a broader audience.\"], \"questions\": [\"What is the formal definition of faithful (CoT) reasoning in LLMs? Unless, I am missing something this was stated to be formally defined in line 93, but I fail to find this definition later in the manuscript.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for the authors' global responses and specific explanations towards my initial questions. After reading all the responses as well as feedbacks from other reviewers, I still have some concerns about this work.\\n\\nWhile the paper explains why certain alternative metrics (e.g., counterfactuals or paraphrasing) were not chosen, it remains unclear why \\u201cearly answering\\u201d is a reasonable metric for measuring faithfulness, particularly for black-box LLMs. Specifically because: (1) Faithfulness, as noted in the authors\\u2019 response, lacks a clear formal definition due to the complexity of internal reasoning in LLMs (\\u201cWhat constitutes internal reasoning for LLMs is a vastly complex topic, and as such can\\u2019t be reduced to a simple formal definition\\u201d). Without a clear foundation, how could the authors ensure that \\\"early answering\\\" meaningfully represents faithfulness? (2) For the metric itself, the idea that truncating reasoning steps alters a model's outputs is intriguing but does not convincingly establish that this alteration correlates with internal faithfulness rather than being an incidental behaviour.\\n\\nAs I understand it, this paper mainly reports the results of existing techniques including in-context learning, fine-tuning and activation editing to control the \\u201cearly answering\\u201d. Such a \\u201cfinding\\u201d is not novel or solid enough to sorely make the paper get accepted. Not only as some insights could already be seen from the previous work by Lanham et. al. (2023), but also the less-satisfying results of CoT or fine-tuning are more or less expectable, as those techniques were not initially designed for \\\"not early answering\\\", but for improving the end-to-end performance. Beyond that, the paper did not provide either clear approaches to improve the so-called \\u201cfaithfulness\\u201d or fundamental analyses about what caused this issue of unfaithfulness.\\n\\nIn this sense, I would agree with the point raised by reviewer qPR7 that the paper did not go deep enough, but more in the aspects that (1) what essentially constitutes the LLMs\\u2019 faithfulness and how should we measure it; and (2) what is the best (provable) practice to improve faithfulness, which should not be limited by existing techniques like CoT or fine-tuning. \\n\\nGiven all these concerns, I would maintain my current score unless further evidence is provided that are more reasonable or convincing. I would encourage the authors to expand their analysis and propose more innovative approaches in future iterations of this work, and thanks again for the efforts.\"}",
"{\"title\": \"Response to Reviewer hqPm\", \"comment\": \"Thank you for your thoughtful review of our paper. We are happy that you recognize our paper\\u2019s contributions towards connecting many disparate concepts and filling a significant gap in evaluating for interventions. We would like to address the weaknesses you pointed out and provide clarification on the questions raised.\\n\\n**W1/Q1: Do you expect activation steering to be more or less effective for other models/architectures?**\\n\\nWe appreciate the suggestion to trial AE on other models. We are actively collecting results for Gemma and appreciate your patience.\\n\\n**W2: Finetuning with most faithful explanations vs finetuning by length?**\\n\\nThank you for the suggestion to finetune according to length, which would provide good insights into how CoT length affects faithfulness across different domains. We kindly refer the reviewer to **Fig 11** (Appendix D.1, page 16), where we inspect test examples on which faithfulness remain the same/improved and observe that the average number of CoT reasoning steps used by each model generally increased (7/9 cases for ICL and 5/6 cases for FT). Models often invoked more granular CoT reasoning steps to improve faithfulness according to the early-answering metric. Mechanically, this is likely to increase the chances of a mismatch between intermediate and final answer probabilities (and thus the AOC). As such, we do not necessarily wish to optimize for faithfulness by learning shortcuts on the metric such as the above.\\n\\nAdditionally, please see our **global response** for evidence of why finetuning is a fundamentally difficult challenge in the context of faithfulness, which we deem particularly relevant to this discussion.\\n\\n**Q2: Will you be releasing code/data for how faithfulness was calculated in this particular case?**\\n\\nAbsolutely! We are happy to release our coding framework upon publication, which includes extensive pipelines that lay the groundwork for assessing faithfulness, using both OpenAI APIs and Hugging Face GPU implementations.\\n\\n**W3/Q3: Do you expect your results to be consistent across different faithfulness metrics?**\\n\\nThis is a great question. We conducted additional experiments to evaluate faithfulness on the AQuA dataset using strategies proposed by Lanham et. al (2023). We looked at the *'Adding Mistakes'* and *'Paraphrasing'* strategies to measure faithfulness and find that both these measures are highly correlated with faithfulness from the *'Early Answering'* strategy, shown in **Fig 12** in the pdf (Appendix D.2, page 17). Our observation is also consistent with the findings reported by Lanham et. al. (2023).\\n\\nHowever, we do note that identifying fundamental challenges in applying interventions to one faithfulness measure (early answering) sheds much light on how optimizing for faithfulness can be severely limited. While extensive, unified analysis across further metrics would enhance the paper, we believe this belongs squarely in future work and does not detract significantly from our main illustrations regarding fundamental issues of faithfulness (namely, as per the global response, finetuning is a moving target, and ICL mimics word rather than learning intrinsic faithfulness, or learns other salient shortcuts such as increasing CoT granularity).\\n\\n**W4. 
The faithfulness based activation editing strategy was introduced, but the results on it were not included in the paper.**\\n\\nThe results for activation editing strategy are presented in Section 4.2.3. Figure 10 shows how faithfulness and accuracy vary with the intervention hyper-parameters - number of heads $K$ and strength of intervention $\\\\alpha$.\\n\\nThank you once again for your insightful suggestions and comments. We are actively incorporating the above explanations to enhance the quality of our paper. We believe that we have addressed all the concerns, barring additional activation editing results. If there is any aspect that you feel has not been fully resolved, we would be happy to provide further information. If you are satisfied with our response, we would truly appreciate your consideration in raising your evaluation score.\"}",
"{\"title\": \"Activation Editing results on Gemma-7B-IT and Mistral-7B-Instruct\", \"comment\": \"Thank you for your patience.\\n\\nWe appreciate the suggestion to trial Activation Editing strategy on other models. While we agree with the reviewer that effectiveness of activation steering can vary across models and attention mechanisms, Llama-3-8B-Instruct was a good starting point as it has proven to be effective in Li et al. (2023). We have performed activation editing (AE) experiments on Gemma-7B-IT and Mistral-7B-Instruct to make our results more robust as per the reviewer's requests. Please see page 22 and 23 of the updated manuscript. Gemma-7B-IT and Mistral-7B-Instruct are models of similar size as Llama-3-8B-Instruct but different architectures and attention mechanism. Gemma-7B-IT uses the well known self attention and Llama-3-8B uses grouped query attention (key and value heads are shared across a group of attention heads). In addition to grouped query attention, Mistral-7B-Instruct uses sliding window attention for efficient scaling to very long context windows. \\n\\nFor all three datasets (AQuA, LogiQA, TruthfulQA), we ablate over the number of intervened heads ($K$) and the intervention strength $\\\\alpha$. As was the case with the Llama model, none of the intervention configuration leads to improvement of both accuracy and faithfulness. There is a significant trade-off in accuracy as faithfulness improves, and intervening beyond 8 attention heads often leads to illegible responses.\"}",
"{\"title\": \"Response to Reviewer qPR7\", \"comment\": \"Thank you for your thoughtful review of our paper, and for recognizing our main contribution that other intervention strategies may be required to achieve faithfulness, based on our investigation of existing techniques. We would like to address the weakness you pointed out and provide clarification.\\n\\n**W1/Q1: Insights on why faithfulness is difficult to learn, either in the form of mathematical theorems, or carefully designed experiments would be helpful**\\n\\nThis is an important point and one that we did not highlight enough in the main text, though we have thought extensively about it. The goal of faithful Chain of Thought (CoT) reasoning involves fundamentally altering the model to generate reasoning that is more consistent with its internal decision-making processes. This represents an intrinsic change in the model's behavior, rather than simply learning a new task. This poses significant challenges when trying to learn faithful CoT reasoning, since there is no guarantee that any faithful examples used will remain faithful once model parameters are adjusted. In other words, learning faithfulness poses non-stationary/moving targets.\\n\\n**New experiments**\\n\\nWe are happy to provide a carefully designed experiment to demonstrate empirical evidence of this. Please refer to our **global response** for full details. We measure the faithfulness of the finetuned model\\u2019s CoT explanations with respect to both itself and the original model. This does not make sense in other domains with static datasets, but for faithfulness is crucially insightful, where there do not exist ground truth faithful (question, explanation) pairs. Our findings demonstrate that once finetuning occurs, model internals shift, and what constitutes faithfulness changes. This is analogous to chasing a dynamic/moving target and poses fundamental challenges to standard static finetuning.\\n\\nAs it turns out, we made findings on the fundamental challenges of finetuning for faithfulness and the limitations of ICL, in Appendices D3 and D4, but failed to prioritize their relevance correctly for the main text. Therefore, we thank the reviewer sincerely for bringing this to our attention. We have dedicated a section in the main text to these findings.\\n\\n**Closing Remark**\\n\\nIt would be great if you can let us know if you have any additional concerns, and we will be happy to respond. Should the insights satisfy your queries on why faithfulness is difficult to learn, we would strongly appreciate a vote of acceptance for our paper.\"}",
"{\"title\": \"Response to Reviewer De5a\", \"comment\": \"Thank you for your thoughtful review of our paper. We are delighted that you recognize the importance of the issue of faithfulness, the soundness of our investigation, and the impact of our findings. We would like to address the weaknesses you pointed out and provide clarification on the questions raised.\\n\\n**W1: Number of benchmarks**\\n\\nWe appreciate the reviewer\\u2019s suggestion to expand datasets/domains. This would surely enhance the paper\\u2019s original findings, though we do believe that this belongs in future work for two reasons. First, the objective of our study was to demonstrate that conventional strategies are not guaranteed to improve faithfulness, and second, we are interested in reasons why faithfulness is fundamentally hard to achieve. To do so, we emphasized a breadth of systematic investigation across many potential strategies for finetuning or in-context learning example selection, across reasoning vs non-reasoning datasets, in order to find failure cases and explanations as to why. We value the suggestion to explore other domains for additional insights in follow-up work.\\n\\n**W2: Explanations for success and failure of faithfulness**\\n\\nThis is a very important point, to which we have dedicated our **global response** to. In fact, many explanations we originally had of why faithfulness is hard to achieve, were not prioritized correctly in the main text (see Appendix D). In general, faithfulness is not guaranteed. We thank the reviewer for pointing out the importance of these results for the main text.\"}",
"{\"metareview\": \"The paper examined the Chain-of-Thought (CoT) behavior in large language models (LLMs) through experiments involving in-context learning, fine-tuning, and activation editing. This work combines the three aforementioned techniques to improve LLMs in producing what is referred to as faithful CoT reasoning and rational explanations to the delivered output. The authors argue that the training process of these models does not prioritize faithfulness, which contributes to the generation of more self-aware content. The paper targets an important and timely challenge in LLM with detailed experiment design. While there are several major concerns remain regarding the novelty of the proposed method and the investigation of the empirical results. The proposed solution is combining applied techniques in LLMs, which render the contribution incremental and straightforward. Technically, the contribution lacks in rigor, and many of the applied strategies are not formally justified. Furthermore, there is lack of insights on the empirical results. For instance, why faithfulness is difficult to learn, either in the form of mathematical theorems, or carefully designed experiments would be helpful. And it remains unclear why \\u201cearly answering\\u201d is a reasonable metric for measuring faithfulness, particularly for black-box LLMs. Given the above reasons, after discussion with the reviewers, we would encourage the authors to expand their analysis and propose more innovative approaches in future iterations of this work for resubmission.\", \"additional_comments_on_reviewer_discussion\": \"Though part of reviewers' concerns have been resolved during rebuttal, the main concerns from reviewers remain regarding the novelty of the proposed method and the investigation of the empirical results. The proposed solution is combining applied techniques in LLMs, which render the contribution incremental and straightforward. Furthermore, there is lack of insights on the empirical results. Given the above reasons, after discussion with the reviewers, we would encourage the authors to expand their analysis and propose more innovative approaches in future iterations of this work for resubmission.\"}",
"{\"comment\": \"Thank you for your response and additional explanation on why fine-tuning and in-context learning did not help faithfulness in your work. As I understand it, the loss function used for fine-tuning in the work aims to improve the accuracy in explanation and the final answer, and not for improving faithfulness. Given that, it is not surprising that it did not improve faithfulness. I would maintain my view that the paper did not go deep enough in providing understanding on whether faithfulness is difficult to learn, even if you were to design methods, e.g. loss functions, for learning it.\"}",
"{\"comment\": \"Hi authors, thank you so much for such thoughtful discussions and additional results on activation editing on such a short notice. I also feel that my comments and questions were adequately addressed. I also think that the global responses were particularly helpful for expanding upon the faithfulness discussions on ICL.\\n\\nI believe that this paper addresses an important question of the consistency regarding faithfulness, and I personally think the authors considered many experiments from varying methods (finetuning, ICL, various metrics, various settings, across many tasks). However, my concern is mostly with the limited definition of the faithfulness itself (only one way of defining/measuring faithfulness). Overall, I maintain my favorable assessment of this paper.\\n\\nThanks to the authors for an interesting paper, and please let me know if there are any misunderstanding on my part!\"}",
"{\"title\": \"Response to Reviewer gNfW (2/2)\", \"comment\": \"**Q1: What is the sample size of the benchmark?**\\n\\nThe sample size of each benchmark was 400 training examples from which to select responses (with 10 responses sampled per question in the case of 0.3 temperature sampling), and 100 test questions per dataset on which to evaluate faithfulness. This is motivated by a) high API costs and b) valuing breadth of evaluation across the several strategies that we introduce, rather than depth in test sample size (our evaluations show that mean test set faithfulness converged, on average, after around 60 to 80 of the 100 samples).\\n\\n**Q2: When selecting N ICL demonstrations, are we considering questions' similarities or just using faithfulness as the single index?**\\n\\nFaithfulness was considered as the index for selecting examples for ICL/FT, rather than similarity, since we effectively create a dataset of faithful question-explanation pairs from which the model can learn. Interestingly, this results in more faithfulness to the original model where the examples are initially drawn from (global response). There is no guarantee that finetuning on a model\\u2019s faithful or similarly related examples would result in improved faithfulness for the finetuned model. We appreciate the suggestion on similarity, though we believe it belongs squarely in future work.\\n\\n**Q3-5: Text corrections**\\n\\nWe have updated the figure captions to reduce the requirement to browse around, thank you for the suggestion! We have also updated acronyms as suggested and fixed the mentioned typos, and will upload the new text soon once other edits are complete. We wholeheartedly appreciate the reviewer\\u2019s careful examination of our text.\\n\\nThank you once again for your insightful suggestions and comments. We are actively incorporating the above explanations to enhance the quality of our paper. We believe that we have addressed all the concerns. If there is any aspect that you feel has not been fully resolved, we would be happy to provide further information. If you are satisfied with our response, we would truly appreciate your vote of acceptance for our paper.\\n\\n[1] Wei et al. Emergent abilities of large language models. TMLR, 2022.\\n\\n[2] Li et al. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. NeurIPS, 2023.\\n\\n[3] Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR, 2015.\\n\\n[4] Bolukbasi et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. NeurIPS, 2016.\\n\\n[5] Adebayo et al. Sanity Checks for Saliency Maps. NeurIPS, 2018.\\n\\n[6] Jain et al. Attention is Not Explanation. ACL, 2019.\"}",
"{\"summary\": \"This paper investigates the challenge of generating faithful Chain-of-Thought reasoning in large language models, specifically focusing on approaches like in-context learning, fine-tuning, and activation editing. While the authors highlight the importance of faithfulness in CoT reasoning for trustworthiness in high-stakes domains like healthcare, their empirical results suggest that none of these methods yield significant improvements in CoT faithfulness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"(1) The topic of faithful reasoning of LLMs is interesting and sounds reasonable to investigate.\\n\\n(2) By demonstrating the limited success of conventional strategies, the paper highlights the intrinsic difficulty of faithful reasoning in LLMs, which provides a strong basis for future exploration. \\u200b\\n\\n(3) The presentation is generally clear and easy to follow.\", \"weaknesses\": \"(1) The paper evaluates standard techniques like in-context learning, fine-tuning, and activation editing to improve CoT faithfulness, but these methods have already been extensively studied in other contexts such as improving accuracy, bias reduction, and factual consistency. The paper does not present any substantial technical adaptations or theoretical contributions to these methods specifically for faithful CoT reasoning. For example, while activation editing is discussed, it largely follows the framework of existing works like Li et al. (2024) without offering any new insights. The novelty of merely applying them to faithful CoT seems limited, and the contribution does not significantly advance the field beyond the status quo.\\n\\n(2) The \\\"early answering\\\" metric used to evaluate faithfulness is based on whether truncating CoT reasoning affects the model's final output. However, the reason for taking it as the best way to measure faithfulness remains unclear, particularly given the complexity of CoT explanations. The measure seems too simplistic, as it fails to capture nuances in reasoning that may be faithful but not necessarily immediately reflected in the final answer. This could raise a misalignment between the metric and the goal of the research, which is to assess whether CoT explanations reflect the internal logic of the LLM.\\n\\n(3) Although the paper acknowledges that none of the explored methods significantly improve CoT faithfulness, it does not provide a deep analysis of why these methods fail. For example, the results show only marginal gains in faithfulness, but the paper does not dive into what specifically causes this limitation\\u2014whether it is the inherent architecture of LLMs, the quality of training data, or other factors.\\n\\n(4) While the paper claims to \\\"lay the groundwork\\\" for future research in trustworthy CoT reasoning, it does not propose concrete next steps or actionable insights based on the experimental findings. The conclusion merely restates that current methods are insufficient without suggesting innovative ideas or frameworks that could be explored in the future. 
This lack of direction limits the potential impact of the paper in advancing the field.\", \"questions\": \"As the authors claim that \\\"our work underscores the inherent difficulty in eliciting faithful CoT reasoning from LLMs, suggesting that the current array of approaches may not be sufficient to address this challenge\\\", I wonder what could be revealed from the evaluation about the fundamental cause of the limitation for current LLM paradigms? Further, what could be the potential way to address them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reviewer fxK8 (2/2)\", \"comment\": \"**W3/Q1: Why is faithfulness fundamentally challenging?**\\n\\nThis is a great point, and one that we did not highlight enough in the main text, though we have thought about it extensively. We kindly invite the reviewer to read our global response. The goal of faithful Chain of Thought (CoT) reasoning involves fundamentally altering the model to generate reasoning that is more consistent with its internal decision-making processes. This represents an intrinsic change in the model's behavior, rather than simply learning a new task. When trying to finetune based on faithful CoT examples, there is no guarantee that such examples remain faithful once model parameters are adjusted. In other words, supervised learning for faithfulness poses non-stationary/moving targets. We provide a carefully designed experiment to demonstrate empirical evidence of this. As it turns out, we made findings on the fundamental challenges of finetuning for faithfulness and the limitations of ICL, in Appendices D3 and D4, but failed to prioritize their relevance correctly for the main text. We therefore thank the reviewer sincerely for bringing this to our attention.\\n\\n**Experiment:** we measure the faithfulness of the finetuned model\\u2019s CoT explanations with respect to both itself and the original model. This does not make sense in other domains with static datasets, but for faithfulness is crucially insightful, where there do not exist ground truth faithful (question, explanation) pairs. What our demonstrations strongly suggest is that pairs that are faithful w.r.t. the original model may no longer be guaranteed to be faithful w.r.t. a finetuned model, supporting our claim that **faithfulness is fundamentally difficult to target using supervised learning.**\\n\\n**Closing Remarks**\\n\\nThank you again for your fruitful feedback! We would like to invite you to further discussion, in case your concerns are still not addressed. It would be great if you can let us know if you have any additional concerns, and we will be happy to respond. Should the insights satisfy your queries on why faithfulness is difficult to learn, we would strongly appreciate your vote of acceptance.\\n\\n[1] Wei et al. Emergent abilities of large language models. TMLR, 2022.\\n\\n[2] Li et al. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. NeurIPS, 2023.\\n\\n[3] Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR, 2015.\\n\\n[4] Bolukbasi et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. NeurIPS, 2016.\\n\\n[5] Adebayo et al. Sanity Checks for Saliency Maps. NeurIPS, 2018.\\n\\n[6] Jain et al. Attention is Not Explanation. ACL, 2019.\"}",
"{\"summary\": \"The paper did an empirical study on whether chain of thought reasoning can be made to accurately reflect the underlying reasoning done by the LLM (i.e. whether it can be made faithful) by in-context learning, fine-tuning, or activation editing. The faithfulness measurement tries to measure whether stopping the chain of thought early would results in different outcomes compared to using the full chain to answer the question; if it does not, it is an indication that the LLM already knows the answer before generating the chain and is doing post-hoc explanation of its reasoning in the chain rather than computing the answer within the chain. The study found that in-context learning, fine-tuning, and activation editing are all not successful in substantially improving faithfulness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provided experimental results indicating that in-context learning, fine-tuning, and activation editing did not result in substantial improvement in faithfulness of chain of thought reasoning. This suggests that other techniques may be required if this type of faithfulness is required.\", \"weaknesses\": \"The paper provides negative results -- this is fine. However, to make a strong paper, insights that are supported by evidence on why the results are negative would be helpful.\", \"questions\": \"Insights on why faithfulness is difficult to learn, either in the form of mathematical theorems, or carefully designed experiments would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your response. We greatly appreciate your engagement in this discussion.\\n\\n**To clarify:** we do not finetune to improve the *accuracy* of explanations, rather, we finetune the model to output explanations that are *faithful* w.r.t. the model. Faithfulness is the primary driver for the examples, to steer the model towards more faithful CoT reasoning.\\n\\nFor example, the **Stochastic Uniform (SU)** selection strategy (Section 3.2, line 245 ) works as follows:\\n1. We sample 10 CoT responses (at temperature 0.3) for every question in the training set.\\n2. For each question, we pick the most faithful CoT response.\\n3. We then finetune the model on these faithful (question, explanation) pairs.\\n\\nAs per the first table in our global response, we can observe that **faithfulness w.r.t. the original model does improve** from 0.6 to 0.65 on the AQuA dataset using this strategy. However, faithfulness w.r.t. the resulting finetuned model decreases to 0.584. We demonstrate that the fundamental limitation of finetuning for faithfulness is that, unlike accuracy, it is a property intrinsic to the model, and thus *any* loss function operating on the model's outputs alone can bear no guarantee on the new model.\\n\\nFor this reason, our work's scope is to demonstrate that existing data-driven (ICL and finetuning) and activation editing (ITI) frameworks cannot steer LLMs to automatically generate faithful explanations. Our research provides a useful datapoint for practitioners that faithfulness is not a property that can be achieved using a completely data-driven approach like supervised finetuning.\\n\\nFurthermore, we want to point out that this exploration has not been previously performed. It is unclear at first whether finetuning a model on faithful explanations would lead to reliable improvements in faithfulness, and our findings demonstrate that model faithfulness is particularly more nuanced.\\n\\nWe hope that this helps to clarify our experimentation! Please let us know if you have questions about our response. Thank you for your time.\"}",
"{\"title\": \"Response to Reviewer gNfW (1/2)\", \"comment\": \"Thank you for your thoughtful review of our paper, and for recognizing the thoroughness of our experimental evaluations. We are glad that you found the experiments to be insightful (**please see our global response** for further insights we have attained for finetuning and in-context learning). We would like to address the weaknesses you pointed out and provide clarification on the questions raised.\\n\\n**W1: Novelty of our contribution**\\n\\nWe appreciate the reviewer\\u2019s perspective on the novelty of the methodology. The key contribution of our work is to explore if popular approaches that found success in modifying LLM outputs to improve properties like accuracy [1] and truthfulness [2] can improve the faithfulness of CoT reasoning generated by LLMs. Previous seminal works [3-6] have also explored whether common strategies are effective at solving certain problems across domains. **The novelty and significance of these kinds of papers do not stem from a new approach or method** but rather the significant insight that none of the popular approaches work for this problem, and we need a new set of approaches that can address this problem. We illustrate that in-context learning, finetuning and activation editing offer limited success, and we also provide evidence as to why these methods are (fundamentally) limited in the context of faithfulness.\\n\\n**W2: AOC values are *not* calculated as a weighted sum as in Lanham et. al. (2023)**\\n\\nWe would like to correct a potential misunderstanding here. In Lanham et. al. (2023), the \\u201cweighted sum\\u201d for AOC corresponds to a) grouping all CoTs by length, b) averaging across all instances in each group to create one profile per group, c) computing AOC for each group\\u2019s profile and d) computing a \\u201cweighted sum\\u201d across all groups. In other words, as in Lanham et. al. (2023): the AOC for each CoT length is weighted by the fraction of CoT samples having that length. This average the CoT graphs for each group length and then compute AOC for each, whereas our scores represent average AOC across all CoTs (instance-level). We hope this provides clarity! We are actively updating the text to clarify this.\\n\\n**W3: Can we compare Top-K faithfulness heads and Top-K truthfulness heads?**\\n\\nThe work on truthfulness [2] is an experiment on a factual question answering dataset i.e.; TruthfulQA which contains questions like \\u201cWhat is the capital of France ?\\u201d for which there is a single correct answer in the options provided. Such tasks don\\u2019t benefit significantly from step by step reasoning. Top-K truthful heads in [2] correspond to the top attention heads whose representations are best predictors of an answer grounded in world knowledge. On the contrary, faithfulness is grounded in the model's inner workings. The exact same reasoning chain generated by two different models can have different values of faithfulness. Hence, top-K truthful heads and top-K faithful heads are different. Upon empirical evaluation, we observe the same. Following table shows the overlap between top-$K$ truthful and faithful heads for Llama-3-8B-Instruct model for varying $K$.\\n\\n| $K$ | Intersection of top-$K$ Faithful & top-$K$ Truthful heads |\\n|:----------------:|:-------------------------------------------:|\\n| 4 | 1 |\\n| 8 | 1 |\\n| 16 | 2 |\\n| 32 | 4 |\\n| 64 | 10 |\\n| 128 | 25 |\\n| 256 | 107 |\\n| 512 | 279 |\"}"
]
} |
1OkVexYLct | Revisiting the Othello World Model Hypothesis | [
"Yifei Yuan",
"Anders Søgaard"
] | \citet{li2023emergent} used the Othello board game as a test case for the ability of GPT-2 to induce world models, and were followed up by \citet{nanda-etal-2023-emergent}. We briefly discuss the original experiments, expanding them to include more language models with more comprehensive probing. Specifically, we analyze sequences of Othello board states and train the model to predict the next move based on previous moves. We evaluate seven language models (GPT-2, T5, Bart, Flan-T5, Mistral, LLaMA-2, and Qwen2.5) on the Othello task and conclude that these models not only learn to play Othello, but also induce the Othello board layout. We find that all models achieve up to 99% accuracy in unsupervised grounding and exhibit high similarity in the board features they learned. This provides considerably stronger evidence for the Othello World Model Hypothesis than previous works. | [
"Othello gaming modeling",
"feature alignment",
"LLM"
] | Reject | https://openreview.net/pdf?id=1OkVexYLct | https://openreview.net/forum?id=1OkVexYLct | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zIvyYWT8Lt",
"vFBYZMoyhQ",
"rXY5bKN5xH",
"rRxG7OLkB4",
"qFovxuI2Qv",
"o8zGGDC7tn",
"lT7kTenzNv",
"ejq4GzWfyb",
"cA8d3OhHMj",
"aVVDtJTtJE",
"J7KMo6dxkd",
"IxtUIIwKfi",
"HY2Kq23ezr",
"FM1oFHESkm",
"BbDCAQpEDo",
"7KHM8BympR",
"5dp96c5OAL",
"3Zf1r5c7gB"
],
"note_type": [
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730306945093,
1734915202297,
1732290919098,
1732332208671,
1733174601466,
1732289908890,
1732291336843,
1737523813815,
1732661110503,
1730548290278,
1732291322171,
1730799771803,
1732566294461,
1732290384218,
1732318986787,
1732291005997,
1732692195038,
1732308705930
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7059/Reviewer_duNp"
],
[
"ICLR.cc/2025/Conference/Submission7059/Area_Chair_iEgE"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Reviewer_ezgd"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Reviewer_yh19"
],
[
"ICLR.cc/2025/Conference/Submission7059/Reviewer_ezgd"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7059/Reviewer_duNp"
],
[
"ICLR.cc/2025/Conference/Submission7059/Reviewer_ezgd"
]
],
"structured_content_str": [
"{\"summary\": \"In 2023 Li et al. (and subsequently Nanda et al. (2023)) formulated the Othello World Model Hypothesis (OWRH), claiming that GPT-2, based purely on Othello move sequence analysis, was able to infer the principles of the game, including its 64-square board representation. This paper revisits OWRH with 6 Large Language Models (LLMs) and enhanced research protocol, providing stronger evidence supporting the hypothesis than the two above-cited articles.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper is clearly written and easy to follow.\\n2) The experiments are well-thought and lead to several new insights.\\n3) The topic should be of interest to some of the ICLR community.\", \"weaknesses\": \"1) The novelty of the paper is limited. The underlying research concept of verifying the OWMH is not new and even though the paper leads to certain new observations, they are not surprising and do not significantly expand the existing knowledge.\\n2) The selection of LLMs is somewhat outdated, since there are quite a few stronger LLMs available these days.\\n3) In the era of MLLMs (Multimodal LLMs) the rationale behind the proposed research is disputable.\", \"questions\": \"1) What qualitatively new observations related to the internal representation of Othello games in LLMs result from the presented study? What are the high-level novel implications of the presented experiments and conclusions?\\n2) Why this particular set of models has been selected? There are quite a few newer models available at the moment, both proprietary and open access.\\n3) How the presented study relates to the representation abilities of MLLMs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper suffers from somewhat unclear definitions and otherwise lack of details. It is also not super novel. Taken together, this suggests that it is not ready to be published in a selective venue - it probably needs another round of clarifications.\", \"additional_comments_on_reviewer_discussion\": \"The authors did engage productively with the reviewers, but the reviewers were not fully convinced.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"**Response to weakness**\\n> 1. The weak point of this study is to see the contribution claimed by authors as an important new contribution or extension of the previous work's claims. Although the authors tried to use multiple language models to see the difference of the modeling capability, it's not a new problem formulation because it's based on the previous works.\\n\\n With all due respect, we disagree strongly with the idea that something isn\\u2019t new because it\\u2019s based on previous work. Scientific work is always based on previous work (we stand on the shoulders of giants); yet, we made it very clear what our main contribution is (lines 49-85): We present a new evaluation method for the Othello problem; we also evaluate more models, present novel visualizations, and discuss important data limitations. The new evaluation method - quantifying the isometry between the representations of the model and the physical layout - is crucial for establishing the existence of world models, as argued for in the paper.\\n\\n> 2. It's unclear why two-hop move generation is introduced as a new benchmark problem. Authors need to explain how two-hop generation provides insights beyond one-hop prediction, or to discuss potential limitations of the one-hop approach.\\n\\nWe extended the one-hop step generation setting in the original Othello paper and investigated two-hop move generation for investigating the model\\u2019s capability to anticipate more strategic, long-term planning in Othello. While one-hop prediction focuses on the immediate next move based on the current board state, it inherently overlooks the deeper decision-making process required for gameplay strategies. Since Othello is a dynamic game where optimal play often sacrifices short-term gains for long-term advantages, the two-hop generation pushes models to simulate this high-level reasoning and provides insights on how much LLMs really understand the game strategy in the zero-shot way. *Specifically, we\\u2019ve made more clarifications in Section 3.2, 3.3 and the Limitation section in our newest version explaining the insights we expect to gain beyond one-hop prediction.*\\n\\n**Response to Questions**\\n> 1.Why is the two-hop move generation an important benchmark in the Othello World Modeling?\\n\\nPlease see my response to weakness 2.\\n\\n> 2. Could you please provide detailed analysis on the difference of each language model on the Othello world modeling? Why do they show different behaviors on the task?\\n\\nWe\\u2019ve provided some case study by showing an example move from Mistral and T5 in Figure 4. We apologize for not listing other models for page limit. **However, we add the predictions by other LLMs for the same game in Appendix.** We find that most of the time, Mistral shows good performance. It consistently demonstrates the best performance across different scenarios, effectively generating legal moves and showing a nuanced understanding of game rules. The Bart model frequently predicts adjacent tiles, leading to numerous failure cases, particularly when trained with smaller datasets. Llama-2 exhibits inconsistent performance, with a tendency to favor certain tile positions or exhibit a bias in move selection. While its predictions are often reasonable, the model appears to lack the robust policy understanding seen in Mistral, especially under constrained training conditions.\\n\\n> 3. 
Please discuss potential implications for particular fields or research areas that might benefit from insights into how language models learn structured world representations.\\n\\nThe induction of world models enables out-of-domain inferences and bias estimation. In addition, investigating the parallels between how language models learn structured representations and how humans internalize similar concepts can shed light on the cognitive processes underlying reasoning, strategy, and language. This could deepen our understanding of human cognition and inform theories of learning and representation. *We've added a corresponding section in our newest version to discuss the impacts.*\"}",
"{\"title\": \"Response by Authors\", \"comment\": \"> If it's not shown in the figure, I can't see it, and I also can't agree with any conclusions that are derived from it. If you do have data on this, please just include it in the figure.\\n\\nSure. We've improved the figure and add the results in the figure.\\n\\n> In the figure, it is evident that the performance of non-pretrained models, such as GPT-2 and Flan-T5, remains less changed when increasing the data size from 12k to 22k.\\n\\nWe agree with your observation that non-pretrained models achieve strong performance more quickly, whereas pretrained models exhibit slower progress, potentially due to the interference of their pretrained language representations with game understanding. To clarify, we are not suggesting that non-pretrained models perform worse than pretrained models; rather, we are highlighting the differences in their learning curves. We will revise the text to prevent any misunderstanding. Also, we apologize for not attaching the data with longer x-axis. However, we've attached it in our next version and revised the text accordingly. \\n\\n> While Figure 3 does not explicitly show plateauing for non-pretrained models\\u2014indicating instead a slow improvement near the end of the x-axis\\u2014we kindly argue that our claims about the comparative trends between pretrained and non-pretrained models remain valid. Specifically, non-pretrained models exhibit sharp, intermediate performance gains on smaller datasets, whereas pretrained models show a more gradual improvement as data size increases.\\n\\nWe would like to clarify that we do not actually mean to claim that pretrained models are better than non-pretrained one. Actually, we see that pretrained knowledge from upstream natural language tasks poses a negative impact on the othello game understanding (as stated in Line 237). What we mean here is just try to compare the curves between the two settings. However, we updated the corresponding parts in our newest version to avoid misunderstanding. \\n\\n> We apologize for our lack of precision. This should be the rules of the game - or similar. This was not yet updated in the PDF.\\n\\nWe've improved the corresponding section in our newest version. Thanks for the reminder. \\n\\n> I also still don't think this is precise enough. A game (like Othello) is not just a sequence of moves. It's a set of rules by which we can play, and any individual play is a sequence of such moves, leading to an outcome as defined by the rules. Given a sequence of moves, just saying that you predict a single next move is an ill-defined problem. There could be many different next moves. If you say that you do this for a single specific player (maybe even an optimal one), or a set of players, sure, that works. This needs to be 100% clear from the text though. And I have strong doubts (given my understanding of \\\"world model\\\") that this tests for world models.\\n\\nWe agree that understanding the game state (specifically, determining which player is currently active) is a critical aspect of game strategy. However, prior research [1,2] has already addressed this problem by training linear and non-linear probes to predict game states using trained models. **Their results demonstrate that a linear projection can achieve near-perfect accuracy in deriving the board state**[2], which they argue supports the world model theory. In our work, we did not conduct experiments specifically targeting this perspective. 
Instead, we focused on single-move prediction, as previous studies have already provided strong evidence for the learnability of game states. Our goal was to build upon these findings and explore how well models can predict the optimal next move, which is a complementary yet distinct challenge requiring both an understanding of the game state and strategic reasoning.\\n\\n[1] Li et al. Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task.\\n\\n[2] Nanda et al. Emergent Linear Representations in World Models of Self-Supervised Sequence Models.\\n\\n**We thank the reviewer again for stating the concerns. We've updated our paper according to the reviewer's suggestions and look forward to further discussion.**\"}",
"{\"title\": \"Response to Reviewer yh19\", \"comment\": \"We thank the reviewer once again for their valuable feedback and look forward to further discussion. We would appreciate any additional comments they may have.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"**Response to weakness**\\n> 1. The novelty of the paper is limited. The underlying research concept of verifying the OWMH is not new and even though the paper leads to certain new observations, they are not surprising and do not significantly expand the existing knowledge.\\n\\nWe respectfully disagree. We show very clearly what the limitations of previous work were, and are first to show that a world model of Othello emerges from playing Othello. Our methodology (and, as a result, evidence) is much stronger than previous works. \\n\\n> 2. The selection of LLMs is somewhat outdated, since there are quite a few stronger LLMs available these days.\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the selection of LLMs. While we recognize that newer and potentially stronger LLMs have become available, the selected models, such as GPT-2, T5, and Bart, LlaMa2, are well-documented, widely used, and readily available to the research community. By using these models, we aim to ensure our findings are easily reproducible. **We argue that the main focus of this paper should not be adopting state-of-art methods for obtaining SOTA performance, but more on investigating whether increasing model capacity could truly help understand the Othello game.** However, we\\u2019ve added some more recent LLMs (e.g. Qwen 2.5) in Table 1, 2 in our newest version and also report the performance of LLaMA 3.1 below to address the reviewer\\u2019s concern. \\n\\n\\n> 3. In the era of MLLMs (Multimodal LLMs) the rationale behind the proposed research is disputable.\\n\\nWe thank the reviewer for highlighting this important point. We acknowledge that leveraging Multimodal LLMs (MLLMs) to train models and investigate feature alignment across different modalities is a highly relevant and promising research direction. In fact, we have identified this as an essential avenue for future exploration and **have already initiated work in this area**. However, as this direction extends beyond the scope of the current paper, we have chosen to pursue it as a separate line of work. *To clarify this, we explicitly discuss it in the Future Work section of the newest version of our paper.*\\n\\n\\n\\n*Response to Questions**\\n> 1. What qualitatively new observations related to the internal representation of Othello games in LLMs result from the presented study? What are the high-level novel implications of the presented experiments and conclusions?\\n\\nKenneth Li and colleagues initially sought to demonstrate that LLMs acquire semantics rather than being \\\"syntax all the way down.\\\" However, their study is limited to small-scale language models, specifically GPT-2, leaving open several important questions. For instance, it remains unclear whether their findings generalize to larger-scale language models or how much training data is required to achieve \\\"perfect\\\" performance. Additionally, their study does not explore whether differences in model architecture could yield similar levels of game understanding. More broadly, we extend this line of inquiry by probing whether language models understand the game's strategy or merely its rules. To address this, we train models to generate sequences comprising multiple moves at a time, pushing beyond simple rule-based learning. Our experiments reveal that different language models, regardless of their architecture, exhibit high similarity in the learned features. 
This finding provides additional support for the Othello world model theory, suggesting that language models can internalize representations of game rules and strategies through exposure to simple game sequences.\\n\\n> 2. Why this particular set of models has been selected? There are quite a few newer models available at the moment, both proprietary and open access.\\n\\nPlease see our response to weakness 2.\\n\\n> 3. How the presented study relates to the representation abilities of MLLMs?\\n\\nPlease see our response to weakness 3.\"}",
"{\"title\": \"Official Comment by Authors (1)\", \"comment\": \"**Response to Questions**\\n> 1. Please define world model.\\n\\nA world model is a global theory of the world. A water with a hole in it can be a water clock, and while the bucket\\u2019s interior can be said to be in a modeling relationship with time, the bucket is not a world model. It is a model of something very local. Training language models on Othello game sequences can imply that LLMs function as a world model because it showcases their ability to learn and internalize the structured dynamics and rules of a complex system, rather than merely memorizing patterns.\\n\\n> 2. Please describe very precisely what the models are actually trained to do.\\n\\nWe apologize for any confusion regarding the objective our models are trained to achieve. To clarify, the models are trained to predict the next move in a sequence, given the preceding moves. For evaluation, we measure the proportion of predicted moves that are legal within the context of the game. This approach follows the problem setting established in previous work [1]. For example, given the sequence of previous moves \\u2018D6C6C5,\\u2019 the model is expected to predict a move like \\u2018C4\\u2019 for evaluation. We have revised the caption for Table 1 and improved the description in Section 3 to ensure clarity and avoid further misunderstandings.\\n\\n> 3. Please provide details on how the SYNTHETIC dataset was generated exactly.\\n\\nWe would like to clarify that we did not create the SYNTHETIC dataset on our own but instead utilized the existing SYNTHETIC dataset provided in [1]. As a result, we did not include extensive details about its construction in our paper, assuming that readers could refer to the original work [1] for more information. According to [1], the SYNTHETIC dataset was generated by uniformly sampling leaf nodes from the Othello game tree. This results in a data distribution that differs significantly from championship games, as it does not reflect any strategic considerations.\\n\\n*[1] Li et al. Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task.*\\n\\nWe thank the reviewer again for the suggestions. **We've created the corresponding parts in the paper and added more elaborations and experimental results concerning the problems discussed in our newest version.** We sincerely hope the reviewer can consider these revisions during the rebuttal phase and kindly reassess the overall score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response from the Author\", \"comment\": \"Thank you for stating your concern again.\\n\\n> Given a prefix of actions from the training data, we train the model to predict the next action in the sequence. For testing, we ask the model to generate sequences of actions. Any action that is legal in the corresponding game state is counted as a correct action, and any illegal action is counted as a mistake. In other words, we do not require the model to reproduce sequences of actions from training data, or to produce strong or optimal sequences of actions, but simply sequences of legal actions. \\n\\nYes, your understanding is correct. The only thing is, in the paper, we use the term 'move' instead of 'action'. **We have revised the corresponding task description sections to make it clearer.**\\n\\n> If my understanding is correct, I do now get confused about some of your discussion around the 2-hop move prediction, and your responses to Reviewer yh19 though. You talk in a bunch of places about how this task is relevant due to the strategic reasoning involved. The end of section 3.2 talks about \\\"deeper decision-making process required for gameplay strategies\\\". The same at the end of 3.3. The Limitations section (6) discusses how the 2-hop move prediction would be challenging due to the complexity involved in optimal play, strategies, ..., and it suggests that predicting that far into the future is inherently an underdetermined task as there may be multiple optimal moves due to symmetries. Does this mean that you now actually do care about specifically predicting good actions, not just any legal action? This seems to conflict with my understanding of what you are doing as I described above.\\n\\nWe need to clarify that the initial intuition behind testing the model's 2-hop performance is to quickly evaluate whether it can learn \\\"good\\\" strategies beyond merely generating legal moves. Previous studies, as well as our experimental results in Table 1, indicate that the model achieves near \\\"perfect\\\" performance in legal move prediction when trained on large datasets. However, this raises the critical question: does the model truly understand how to play the game? And what level of understanding should we expect from from the model? To address these questions, we assess the model's ability to generate two moves consecutively (also sequences of more than two moves in our preliminary experiments). But we find it's more challenging for the model to generate more than one step, and also, the model has close to zero accuracy to generate the whole legal game sequence. So this triggers our discussion of the limitation of what LLMs really learn in Line 483.\"}",
"{\"summary\": \"This paper aims to add additional evidence to the Othello World Model Hypothesis by training a variety of different LLMs for predictive tasks in the game of Othello.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Highly topical area of research, and I think the experiments are carried out well.\", \"weaknesses\": \"Under strengths above, I wrote that I **think** experiments are carried out well. Here is my main issue with the paper: many crucial details are not described in sufficient detail, and/or are too vague. I can make reasonable guesses as to the exact work that was done, and based on this, I do genuinely believe there is good work in here. But I shouldn't have to guess, and this state is not acceptable for a research paper.\", \"i_will_elaborate_on_some_specific_points\": \"1. The entire paper revolves around the hypothesis that LLMs trained on Othello move sequences can induce a \\\"relevant world model\\\". But... I'm missing a definition of world model. I cannot judge with full certainty whether the experiments adequately support the claims, when I don't even have a crisp, clear, unambiguous definition of the hypothesis that the entire paper revolves around. I understand that \\\"world model\\\" is a relatively common phrase, but it is still crucial to define it clearly and unabmiguously.\\n2. The paper does not make it clear exactly what the models are trained to do. Combined with the lacking definition of world model above, this makes things very problematic. It is not clear to me whether the models are trained to:\\n\\n - Given current state (implied by sequence of previous moves), predict what the next played move is / should be.\\n - Given current state (implied by sequence of previous moves) and a next move, predict what the next state will be.\\n - A combination of the above, or anything else.\\n\\nThe caption of Figure 1 talks about \\\"predict the next move\\\". The caption of Table 1 is talking about \\\"game state generation\\\". These two are two very different things. Much of the rest of the paper talks about \\\"move generation\\\", which could be predicting next move again, but could also be about predicting which moves are legal, for instance.\\n\\n3. There are no details whatsoever on how the SYNTHETIC dataset was generated. Which agents were used to play these games? This requires complete details on these agents (what algorithms, how much search time if they used search, on what hardware, any kind of randomisation used to ensure variety in the data, ... we need to know everything, but now we know nothing at all).\", \"other_comments\": [\"Section 2 says that the work of Takizawa (2024) also looked at \\\"whether LLMs adopt similar ones [strategies]\\\", but as far as I can see, they did not do anything even remotely like that at all.\", \"line 159 PLMs should be LLMs?\", \"Section 3.2 refers to Tables 2 and 3, but this should be 1 and 2?\", \"Caption of Figure 2 vaguely mentions \\\"performance\\\". This is not precise enough (could be accuracy, could be error rate, would lead to very different interpretations). There's also no label on the y-axis, which also does not help in this regard.\", \"Line 263/264 talks about performance plateauing, but I don't see it as plateauing at all. Therefore, I also disagree with much of the analysis in the rest of the bottom of page 5. Sure, the decline in error rate becomes less steep at the end for the non-pretrained models. 
But they didn't fully plateau yet, and are **still** outperforming the Pretrained models **also at the very end of your x-axis**. These observations disagree with much of your conclusions here.\", \"Line 462/463 mentions \\\"the policy of the game\\\". There is no such thing as a \\\"the policy\\\" of any game. We can play according to many different policies.\"], \"questions\": \"1. Please define world model.\\n2. Please describe very precisely what the models are actually trained to do.\\n3. Please provide details on how the SYNTHETIC dataset was generated exactly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"**Response to weakness**\\n> 1. Section 2 says that the work of Takizawa (2024) also looked at \\\"whether LLMs adopt similar ones [strategies]\\\", but as far as I can see, they did not do anything even remotely like that at all.\\n\\nWe apologize for the mistake. What we want to mean here is that \\u2018paving the way for future research to explore whether LLMs adopt similar approaches\\u2019. We\\u2019ve made the correction in our newest version. \\n\\n> 2.line 159 PLMs should be LLMs?\\n\\nWe use the term \\u2018PLMs\\u2019 to refer to \\u2018pretrained language models,\\u2019 representing smaller-scale models such as GPT-2 and T5. This distinction is made to differentiate them from large language models (LLMs) like LLaMA and Flan-T5.\\n\\n> 3.Section 3.2 refers to Tables 2 and 3, but this should be 1 and 2?\\n\\nThank you for pointing the typo out. We\\u2019ve corrected this typo in our newest version.\\n\\n> 4.Caption of Figure 2 vaguely mentions \\\"performance\\\". This is not precise enough (could be accuracy, could be error rate, would lead to very different interpretations). There's also no label on the y-axis, which also does not help in this regard.\\n\\nWe apologize for not making this clear. Same as Table 2, the performance refers to error rate performance. We\\u2019ve changed the caption in our newest version.\\n\\n> 5.Line 263/264 talks about performance plateauing, but I don't see it as plateauing at all. Therefore, I also disagree with much of the analysis in the rest of the bottom of page 5. Sure, the decline in error rate becomes less steep at the end for the non-pretrained models. But they didn't fully plateau yet, and are still outperforming the Pretrained models also at the very end of your x-axis. These observations disagree with much of your conclusions here.\\n\\nWe clarify that non-pretrained models plateau in performance when the data size increases from 22k to 27k and 32k, which is not shown in the figure. In the figure, it is evident that the performance of non-pretrained models, such as GPT-2 and Flan-T5, remains less changed when increasing the data size from 12k to 22k. We have updated the corresponding analysis text to make it clearer. While Figure 3 does not explicitly show plateauing for non-pretrained models\\u2014indicating instead a slow improvement near the end of the x-axis\\u2014we kindly argue that our claims about the comparative trends between pretrained and non-pretrained models remain valid. Specifically, non-pretrained models exhibit sharp, intermediate performance gains on smaller datasets, whereas pretrained models show a more gradual improvement as data size increases.\\n\\n> 6.Line 462/463 mentions \\\"the policy of the game\\\". There is no such thing as a \\\"the policy\\\" of any game. We can play according to many different policies.\\n\\nWe apologize for our lack of precision. This should be the rules of the game - or similar.\"}",
"{\"summary\": \"In this paper, authors evaluate the Othello World Model hypothesis using different types of language models. This study is based on the previous works Li et al. (2023) and Nanda et al. (2023). The goal of this study is to reevaluate the hypothesis over multiple language models and see common representations they learnt.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This work is based on previous studies on Othello World Model Hypothesis. It's an interesting study because it tries to see if language models can model the rules of the Othello game from a large amount of transcripts data. Although the hypothesis has been probed in the previous studies, authors propose to reevaluate the hypothesis with more language models and different settings.\\n\\nFrom the reevaluation, authors provide more evidence on the hypothesis and try to provide cross-language model latent representation on the Othello World model.\\n\\nAs a result, the paper could support the previous work's claims with new evidence.\", \"weaknesses\": \"The weak point of this study is to see the contribution claimed by authors as an important new contribution or extension of the previous work's claims. Although the authors tried to use multiple language models to see the difference of the modeling capability, it's not a new problem formulation because it's based on the previous works.\\n\\nIt's unclear\\u00a0why two-hop move generation is introduced as a new benchmark problem. Authors need to explain how two-hop generation provides insights beyond one-hop prediction, or to discuss potential limitations of the one-hop approach.\", \"questions\": [\"Why is the two-hop move generation an important benchmark in the Othello World Modeling?\", \"Could you please provide detailed analysis on the difference of each language model on the Othello world modeling? Why do they show different behaviors on the task?\", \"Please discuss potential implications for particular fields or research areas that might benefit from insights into how language models learn structured world representations.\", \"What is the reason to revisit this hypothesis using more language models and comprehensive probings? Is the previous work not enough to show the hypothesis's validity?\", \"*Please discuss specific types of problems or domains where your approach might be applicable, and what challenges you anticipate in extending beyond Othello.\", \"Could you show that the Othello world model encodes the rules of Othello (to determine the validity of moves) or strategy of game playing?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Unfortunately, I continue to be worried about the clarity of writing in this paper, and the lack of precision.\\n\\n> We apologize for any confusion regarding the objective our models are trained to achieve. To clarify, the models are trained to predict the next move in a sequence, given the preceding moves. For evaluation, we measure the proportion of predicted moves that are legal within the context of the game. This approach follows the problem setting established in previous work [1]. For example, given the sequence of previous moves \\u2018D6C6C5,\\u2019 the model is expected to predict a move like \\u2018C4\\u2019 for evaluation. We have revised the caption for Table 1 and improved the description in Section 3 to ensure clarity and avoid further misunderstandings.\\n\\nI think I understand what you are doing, based on a combination of this response with the paper. I'm still actually not 100% sure though, which is concerning, as this should not be a complicated story at all. It should be possible to make this very precise, clear, unambiguous, and at the same time easy to read, written in plain language. Based on my current understanding, it could go something like: *Given a prefix of actions from the training data, we train the model to predict the next action in the sequence. For testing, we ask the model to generate sequences of actions. Any action that is legal in the corresponding game state is counted as a correct action, and any illegal action is counted as a mistake. In other words, we do not require the model to reproduce sequences of actions from training data, or to produce strong or optimal sequences of actions, but simply sequences of legal actions.*\\n\\nThe above is my current understanding of what you are doing. Please correct if wrong, but, regardless of whether I understood correctly or not, please think about how you can describe what you are doing much more precisely, in plain language.\\n\\nIf my understanding is correct, I do now get confused about some of your discussion around the 2-hop move prediction, and your responses to Reviewer yh19 though. You talk in a bunch of places about how this task is relevant due to the strategic reasoning involved. The end of section 3.2 talks about \\\"deeper decision-making process required for gameplay strategies\\\". The same at the end of 3.3. The Limitations section (6) discusses how the 2-hop move prediction would be challenging due to the complexity involved in optimal play, strategies, ..., and it suggests that predicting that far into the future is inherently an underdetermined task as there may be multiple optimal moves due to symmetries. Does this mean that you now actually do care about specifically predicting *good* actions, not just any legal action? This seems to conflict with my understanding of what you are doing as I described above.\"}",
"{\"title\": \"Official Comment by Authors (1)\", \"comment\": \"*Table 1: 1-hop game move generation error rate of different models. (c) denotes the CHAMPIONSHIP data and (s) denotes the SYNTHETIC data.*\\n| Method | 2k (c) | 20k (c) | 200k (c) | | 2k (s) | 20k (s) | 200k (s) | 2M (s) | full (s) | \\n|-------------|-------|-------|-------|------|-------|-------|------|-------|-------|\\n|Qwen 2.5 (non-pretrained) | 25.2|17.3|5.5||45.9|37.8|20.1|9.2|<0.1|\\n|LlaMa 3.1 (non-pretrained) |21.7|9.3|3.8| |37.1|25.5|13.9|8.2|<0.1|\\n|-------------|-------|-------|-------|------|-------|-------|------|-------|-------|\\n|Qwen 2.5 (pretrained) | 20.9|18.2|6.0||46.5 |39.3|23.4|10.8|<0.1|\\n|LlaMa 3.1 (pretrained) |23.8|11.2|4.1||39.3|26.6|21.5|8.9|<0.1|\\n\\n*Table 2: 2-hop game move generation error rate of different models.*\\n| Method | 2k (c) | 20k (c) | 200k (c) | | 2k (s) | 20k (s) | 200k (s) | 2M (s) | full (s) | \\n|-------------|-------|-------|-------|------|-------|-------|------|-------|-------|\\n|Qwen 2.5 (non-pretrained) | 55.9|25.4|22.8||77.6|65.3|44.2|28.7|3.3|\\n|LlaMa 3.1 (nonpretrained) |50.6|22.7|21.2||73.4|64.8|44.0|26.9|3.0|\\n|-------------|-------|-------|-------|------|-------|-------|------|-------|-------|\\n|Qwen 2.5 (pretrained) | 63.1|38.4|25.8||79.3 |65.3|45.1|36.0|3.9|\\n|LlaMa 3.1 (pretrained) | 58.2|34.1|25.5||82.6|75.4|45.8|34.9|4.0|\\n\\nWe thank the reviewer again for the suggestions. We've added more elaborations and experimental results concerning the problems discussed in our newest version. We sincerely hope the reviewer can consider these revisions during the rebuttal phase and kindly reassess the overall score.\"}",
"{\"title\": \"Note on world models\", \"comment\": \"A world model is a representation or a map of a world, i.e., ideally, a homomorphism. We could have been more explicit about this, but this is the standard interpretation of 'world model' in the LLM understanding debate. This should also be clear from the fact that we say our (ideal) world model is the Othello board layout: The world model we evaluate for is a map of an Othello board. In lines 291-2, we say: 'To validate the Othello World Model Hypothesis, we directly evaluate the internal representation of\\nthe Othello board in language models.' For the evaluation of world models, we check the cosine distance under a Procrustes analysis (see \\u00a74.2). Since a homomorphism is invariant under linear projection, this directly evaluates whether our candidate world model is indeed a map of the world (the Othello board).\\n\\n**We've added a corresponding section in Appendix A, in case readers are not familiar with this area.**\"}",
"{\"title\": \"Official Comment by Authors (1)\", \"comment\": \"> 4. What is the reason to revisit this hypothesis using more language models and comprehensive probings? Is the previous work not enough to show the hypothesis's validity? *Please discuss specific types of problems or domains where your approach might be applicable, and what challenges you anticipate in extending beyond Othello.\\n\\nAs stated in the paper, this is exactly our motivation: Previous work was not enough to show that Othello training induces a board model. As it's limited in one small-scaled model, GPT-2. This leaves several important questions. For instance, it remains unclear whether their findings generalize to larger-scale language models or how much training data is required to achieve \\\"perfect\\\" performance. Additionally, their study does not explore whether differences in model architecture could yield similar levels of game understanding. More broadly, we extend this line of inquiry by probing whether language models understand the game's strategy or merely its rules. To address this, we train models to generate sequences comprising multiple moves at a time, pushing beyond simple rule-based learning. Our experiments reveal that different language models, regardless of their architecture, exhibit high similarity in the learned features. This finding provides additional support for the Othello world model theory, suggesting that language models can internalize representations of game rules and strategies through exposure to simple game sequences. *We've added a future work section and the potential impact section to discuss this.*\\n\\n> 5. Could you show that the Othello world model encodes the rules of Othello (to determine the validity of moves) or strategy of game playing?\\n\\nWe demonstrate that the Othello model encodes the rules of Othello\\u2014potentially forming a \\\"world model\\\"\\u2014through several key findings: 1. **Previous Evidence of Rule Learning**: Prior studies [1,2] have shown that GPT-2 can acquire game rules using accuracy evaluations and linear/non-linear probing methods. Expanding on this, we analyze a broader range of LLMs and find that all models, whether employing an encoder-decoder or decoder-only architecture, achieve strong performance in generating legal moves when trained on extensive game data. 2. **Training-Free Probing with Feature Alignment**: To further explore rule learning, we adopt a training-free probing approach. Using a feature alignment algorithm originally developed for multilingual word alignment, we investigate the similarity of features learned across different models. Results in Table 3 and Figure 4 reveal a consistent pattern of feature distributions, suggesting that diverse models converge on similar representations when trained with Othello game moves. 3. **Latent Move Projections and Physical Position Knowledge**: To assess the extent of rule learning, we analyze latent move projections in Figure 6. These projections show that all legal moves consistently receive high probabilities, while tiles in close physical proximity exhibit high similarity scores. 
This surprising finding provides robust evidence that LLMs can internalize game policies and develop an understanding of spatial relationships, even without explicit training for this purpose. These results collectively highlight that LLMs are capable of encoding Othello's rules and strategies, offering insights into their potential to form structured representations of complex systems.\\n\\n----------\\n[1] Li et al. Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task. \\n\\n[2] Nanda et al. Emergent Linear Representations in World Models of Self-Supervised Sequence Models.\\n\\nWe thank the reviewer again for the suggestions. We've added more elaborations and experimental results concerning the problems discussed in our newest version. We sincerely hope the reviewer can consider these revisions during the rebuttal phase and kindly reassess the overall score.\"}",
"{\"comment\": \"I'd like to thank the authors for the rebuttal, which cleared up some of my doubts, though I still believe the contribution is incremental.\\n\\nI'll keep my original rating.\"}",
"{\"comment\": \"Thank you for your responses.\\n\\n---\\n\\n**On the discussions of Figure 3:**\\n\\n> We clarify that non-pretrained models plateau in performance when the data size increases from 22k to 27k and 32k, which is not shown in the figure. \\n\\nIf it's not shown in the figure, I can't see it, and I also can't agree with any conclusions that are derived from it. If you do have data on this, please just include it in the figure.\\n\\n> In the figure, it is evident that the performance of non-pretrained models, such as GPT-2 and Flan-T5, remains less changed when increasing the data size from 12k to 22k. \\n\\nYes, but rate of change in isolation is not important, not when the starting points are so different. The way I read the data in these figures is: the non-pretrained models already achieve good performance (possibly getting close to saturating?) much more quickly. They simply become slower later on, because they are already close to as good as they can be. The pretrained models only show higher rates of changes in the later changes, because they are simply slower throughout the whole trajectory, so they can look \\\"faster\\\" in the later stages because they still have to catch up and didn't saturate yet. You currently stop the $x$-axis right before the point that we actually need to see.\\n\\n> While Figure 3 does not explicitly show plateauing for non-pretrained models\\u2014indicating instead a slow improvement near the end of the x-axis\\u2014we kindly argue that our claims about the comparative trends between pretrained and non-pretrained models remain valid. Specifically, non-pretrained models exhibit sharp, intermediate performance gains on smaller datasets, whereas pretrained models show a more gradual improvement as data size increases.\\n\\nI still don't agree, or at least not with the tone of phrasing. I see not a single point on the $x$-axis in your figure where pretrained models are better than non-pretrained ones, yet much of the text in the paper makes the story sound positive for the pretrained models.\\n\\n---\\n\\n> We apologize for our lack of precision. This should be the rules of the game - or similar.\\n\\nThis was not yet updated in the PDF.\\n\\n---\\n\\n> A world model is a global theory of the world. A water with a hole in it can be a water clock, and while the bucket\\u2019s interior can be said to be in a modeling relationship with time, the bucket is not a world model. It is a model of something very local. Training language models on Othello game sequences can imply that LLMs function as a world model because it showcases their ability to learn and internalize the structured dynamics and rules of a complex system, rather than merely memorizing patterns.\\n\\nI was not just looking for a definition of \\\"world model\\\" in a response to me on OpenReview, but strongly feel that it should be in the paper. If your enter paper revolves around investigating a \\\"world model hypothesis\\\", it makes no sense to me not to have a 100% clear and unambiguous definition of what that actually means, right in the paper. \\n\\nI also feel that it could still be much more precise. \\\"X is a global theory of the world\\\". That's not precise to me. What does this mean? 
From the rest of your response here, I understand it as something like: \\\"A world model is a function that, given a state, can tell me what all the legal actions are, and given a state plus a legal action, tell me what the next state and the immediate reward will be.\\\" That would be a precise definition (don't know if it would be correct?).\\n\\nOf course, a **crucial follow-up question** will then be: do your experiments actually test for the definition. I doubt that they do, at least not for the definition I have come up with.\\n\\n> We apologize for any confusion regarding the objective our models are trained to achieve. To clarify, the models are trained to predict the next move in a sequence, given the preceding moves. For evaluation, we measure the proportion of predicted moves that are legal within the context of the game. This approach follows the problem setting established in previous work [1]. For example, given the sequence of previous moves \\u2018D6C6C5,\\u2019 the model is expected to predict a move like \\u2018C4\\u2019 for evaluation. We have revised the caption for Table 1 and improved the description in Section 3 to ensure clarity and avoid further misunderstandings.\\n\\nI also still don't think this is precise enough. A game (like Othello) is not just a sequence of moves. It's a set of rules by which we can play, and any individual play is a sequence of such moves, leading to an outcome as defined by the rules. Given a sequence of moves, just saying that you predict a single next move is an ill-defined problem. There could be many different next moves. If you say that you do this **for a single specific player** (maybe even an optimal one), or a set of players, sure, that works. This needs to be 100% clear from the text though. And I have strong doubts (given my understanding of \\\"world model\\\") that this tests for world models.\"}"
]
} |
1Ogw1SHY3p | Monet: Mixture of Monosemantic Experts for Transformers | [
"Jungwoo Park",
"Ahn Young Jin",
"Kee-Eung Kim",
"Jaewoo Kang"
] | Understanding the internal computations of large language models (LLMs) is crucial for aligning them with human values and preventing undesirable behaviors like toxic content generation. However, mechanistic interpretability is hindered by *polysemanticity*—where individual neurons respond to multiple, unrelated concepts. While Sparse Autoencoders (SAEs) have attempted to disentangle these features through sparse dictionary learning, they have compromised LLM performance due to reliance on post-hoc reconstruction loss. To address this issue, we introduce **Mixture of Monosemantic Experts for Transformers (Monet)** architecture, which incorporates sparse dictionary learning directly into end-to-end Mixture-of-Experts pretraining. Our novel expert decomposition method enables scaling the expert count to 262,144 per layer while total parameters scale proportionally to the square root of the number of experts. Our analyses demonstrate mutual exclusivity of knowledge across experts and showcase the parametric knowledge encapsulated within individual experts. Moreover, **Monet** allows knowledge manipulation over domains, languages, and toxicity mitigation without degrading general performance. Our pursuit of transparent LLMs highlights the potential of scaling expert counts to enhance mechanistic interpretability and directly resect the internal knowledge to fundamentally adjust model behavior. | [
"large language models",
"mechanistic interpretability",
"monosemanticity",
"mixture of experts",
"knowledge unlearning"
] | Accept (Poster) | https://openreview.net/pdf?id=1Ogw1SHY3p | https://openreview.net/forum?id=1Ogw1SHY3p | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vK4a7CffjB",
"v99iozG8uU",
"sYdsiAHWt6",
"sC6qvXoUN2",
"qfEuk0jafR",
"oiBN59wVrj",
"nXREl6iN7C",
"jSWN2Lldpw",
"hp39cIvBPV",
"gXm6nZ8R5r",
"eCLX0Y4vKH",
"csG0SYWslq",
"cFaEpqRVTF",
"YKX5HXjUlz",
"WKNrqT1kAp",
"L6HZ7I1cHb",
"Ku3lViIwpM",
"K0FZZBwqNi",
"JgTCqc02dT",
"IF6FODz0vP",
"HacpIR2W7X",
"ET6S71zlni",
"DrZv0xbJx1",
"7foltFIvgS",
"6qXKtFUD5N",
"5fJzkqVKHT",
"3zuyDpdc8G"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review"
],
"note_created": [
1730660658574,
1732582195719,
1732226099993,
1732227527119,
1732228057472,
1732225881000,
1732224356141,
1733070269393,
1732647545516,
1732224248829,
1732223351758,
1732227591711,
1732809940240,
1732796701144,
1732646773618,
1730425792151,
1733069690846,
1737524002458,
1732226453122,
1730146980469,
1732539114474,
1732553956316,
1732724779007,
1732390808578,
1732732675840,
1734788360095,
1730681155409
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_53hd"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_oJuH"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_sHPn"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_sHPn"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_YJRi"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_YJRi"
],
[
"ICLR.cc/2025/Conference/Submission9736/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_YJRi"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_oJuH"
],
[
"ICLR.cc/2025/Conference/Submission9736/Area_Chair_aByw"
],
[
"ICLR.cc/2025/Conference/Submission9736/Reviewer_oJuH"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces the use of Mixture of Experts as a way to have more interpretable models in the context of polysemanticity. They change the standard MoE architecture in that they use product key retrieval technique as a router and they have experts associated with each key. They consider two strategies to create the model: horizontal expert decomposition and vertical expert decomposition, and finally explain how to train their models (Section 3). In the experiments section (Section 4), they show that the experts display monosemanticity and that removing some experts from some domain yields significant performance degradation (Sections 5.1 and 5.2). The Monet approach also allows to purge toxic experts from the model, which is interesting from a safety perspective.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"I like the idea of the paper. Some earlier works noticed that experts display some monosemanticity [1,2] and it is great to see this work push this idea. I also think that the set of experiments is very convincing and I believe that this work may be influential for getting more interpretable neural networks.\\n\\n[1] Fedus, William, Barret Zoph, and Noam Shazeer. \\\"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\\\" Journal of Machine Learning Research 23.120 (2022): 1-39.\\n\\n[2] Fedus, William, Jeff Dean, and Barret Zoph. \\\"A review of sparse expert models in deep learning.\\\" arXiv preprint arXiv:2209.01667 (2022).\", \"weaknesses\": \"I think the main weakness of the paper is the presentation + writing, especially in Section 3. I am happy to consider improving my score if much better explanations of the method are given in Section 3.\\n\\n- **Section 3 should be more clear (especially the Horizontal and Vertical decomposition)**: I read the work by Lample et al. [1] for completing this review and according to my understanding, there is a unique $(u_i, v_i)$ that is associated with each key. Their approach makes sense to me.\\n\\n -- I am very confused why there is the mix and match (along the horizontal or the vertical) in this paper. And also, why is there any memory savings (compared to the PEER approach)? And why is each expert of dimension m (while in PEER, it is a single neuron). \\n\\n\\n -- I also recommend the authors to do a complexity calculation like in [1], Section 3.2 to be fully transparent on the memory/computation complexities. \\n\\n -- I also didn\\u2019t find Figure 1 very clear, for instance it was not clear what \\u201cTop\\u201d, \\u201cbottom\\u201d or \\u201cTL\\u201d, \\u201cBL\\u201d refer to. Above all, I think that this drawing should be improved.\\n\\n\\n- **Lack of baselines**: It is also not clear to me that a whole new architecture is needed to ensure a more interpretable model. For instance, [2,3] showed that standard MoEs display monosemanticity behaviors. Therefore, I think it is important to maybe compare the Monet method with standard MoEs. Would for instance fine-grained MoEs [4] work in this case? Is it the fact that we have a lot of experts that is responsible for more \\u201cmonosemantic\\u201d experts? Or the routing strategy is responsible for it? I just want to be convinced that no simpler architecture would lead to the results obtained in Section 4.\\n\\n\\n\\n[1] Lample, Guillaume, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\\u00e9 J\\u00e9gou. 
\\\"Large memory layers with product keys.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[2] Fedus, William, Barret Zoph, and Noam Shazeer. \\\"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\\\" Journal of Machine Learning Research 23.120 (2022): 1-39.\\n\\n[3] Fedus, William, Jeff Dean, and Barret Zoph. \\\"A review of sparse expert models in deep learning.\\\" arXiv preprint arXiv:2209.01667 (2022).\\n\\n[4] Krajewski, Jakub, Jan Ludziejewski, Kamil Adamczewski, Maciej Pi\\u00f3ro, Micha\\u0142 Krutul, Szymon Antoniak, Kamil Ciebiera et al. \\\"Scaling laws for fine-grained mixture of experts.\\\" arXiv preprint arXiv:2402.07871 (2024).\", \"questions\": \"I listed my question in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your valuable feedback\", \"comment\": \"We appreciate your support and are grateful for your endorsement of our paper\\u2019s acceptance. Thank you.\"}",
"{\"title\": \"Response to Reviewer 53hd (Part 2/2)\", \"comment\": \"> Lack of baselines\\\\\\n**Q1**. I think it is important to maybe compare the Monet method with standard MoEs\\\\\\n**Q2**. Would for instance fine-grained MoEs work in this case?\\\\\\n**Q3**. Is it the fact that we have a lot of experts that is responsible for more \\u201cmonosemantic\\u201d experts?\\n> \\n\\nFollowing your request of a fine-grained SMoE interpretability baseline in **Q1** and **Q2**, we have included knowledge unlearning of OLMoE [3]. OLMoE LLM with total 6.9B parameters has been selected as the representative baseline for conventional SMoE architectures for two reasons: (1) it has the largest number of experts among the publicly available SMoE LLMs [3-5] and (2) it has been trained with an extensive amount of tokens from various sources.\\n\\n**Monet-VD 1.4B\\u2019s Domain Masking Performance Perturbation in MMLU**\\n\\n||biology|business|chemistry|compsci|economics|engineering|health|history|law|math|other|philosophy|physics|psychology|\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|$\\\\Delta$ Target Domain|-4.66|-4.61|-5.49|-1.05|-2.32|-4.14|-3.21|-2.14|-0.81|-3.1|-0.37|-1.5|-1.2|-2.59|\\n|$\\\\Delta$ Avg. Other Domains|-0.42|-0.05|-0.28|-0.51|-0.08|-0.06|0.04|-0.21|-0.2|0.03|-0.02|-0.24|-0.28|-0.21|\\n|$\\\\Delta$ Std. Other Domains|0.52|0.9|0.93|0.74|0.69|0.66|0.67|0.57|0.66|0.79|0.7|0.71|0.81|0.61|\\n\\n- Mean of $\\\\Delta$ Target: -2.65\\n- Mean of $\\\\Delta$ Avg. Other: -0.18\\n- Mean of $\\\\Delta$ Std. Other: 0.71\\n\\n**OLMoE 6.9B\\u2019s Domain Masking Performance Perturbation in MMLU**\\n\\n||biology|business|chemistry|compsci|economics|engineering|health|history|law|math|other|philosophy|physics|psychology|\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|$\\\\Delta$ Target Domain|-1.74|-5.89|-4.46|-9.47|-3.68|-6.9|-4.55|-8.62|-7.98|-6.56|-0.62|-4.74|-2.72|-0.86|\\n|$\\\\Delta$ Avg. Other Domains|-1.33|-2.86|-3.08|-0.4|-1.51|-4.29|-1.67|-3.8|-5|-3.22|-0.27|-1.91|-0.96|-0.66|\\n|$\\\\Delta$ Std. Other Domains|1.3|1.78|2.04|1.18|1.62|2.38|2.08|2.15|2.22|2.51|1.11|1.49|1.55|0.68|\\n\\n- Mean of $\\\\Delta$ Target: -4.91\\n- Mean of $\\\\Delta$ Avg. Other: -2.21\\n- Mean of $\\\\Delta$ Std. Other: 1.72\\n\\nOur additional experimentations suggest that OLMoE may be constituted with polysemantic experts. Results can be summarized as the following:\\n\\n1. In OLMoE, there were extremely few specialized experts for MMLU, based on our criteria of skewness in expert routing score. In the case of Monet, we identified specialized experts if its highest routing score on a particular domain is twice as much as that of the second highest domain. However, OLMoE\\u2019s experts\\u2019 routing score was evenly distributed, making it difficult to detect specialized experts. We leveraged criteria of occurrences in maximum activation to determine the expert\\u2019s domain specialization to obtain the results.\\n2. OLMoE\\u2019s accuracy drop in other domains was significant in unlearning, possibly due to the entangled characteristics of experts since their specializations were only detectable with argmax criteria.\\n3. 
We measured delta performances\\u2019 mean standard deviation of the other 13 domains, resulting in 0.7 for Monet and 1.7 for OLMoE, differing twice as much, showing disparity in stability of knowledge conservation during unlearning.\\n\\nWe believe that such results suggest that for most of the SMoE architectures [3-6] with 64 experts or less, the expert count is too small to disentangle polysemanticity. Our architecture, on the other hand, has 262,144 experts available, which we believe enable fine-grained specialization, resulting in monosemantic experts that capture mutually exclusive aspects of knowledge. To further address your inquiry of **Q3**, we provide an overview of unlearning results of Monet, Gemma Scope, OLMoE, and LLaMa in `Figure 3` in our revised paper.\\n\\nWe sincerely appreciate your thorough review and valuable suggestions, which have helped strengthen our manuscript substantially. We remain available to address any additional questions or concerns you may have.\\n\\n[3] Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, et al. OLMoE: Open Mixture-of-Experts Language Models. arXiv preprint arXiv:2409.02060, 2024.\\\\\\n[4] Yikang Shen, Zhen Guo, Tianle Cai, and Zengyi Qin. JetMoE: Reaching Llama2 Performance with 0.1M Dollars. arXiv preprint arXiv:2404.07413, 2024.\\\\\\n[5] Damai Dai, Chengqi Deng, Chenggang Zhao, R.x. Xu, Huazuo Gao, et al. DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1280\\u20131297, August 2024.\\\\\\n[6] Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski, Maciej Pi\\u00f3ro, Micha\\u0142 Krutul, et al. Scaling Laws for Fine-Grained Mixture of Experts. In ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2024.\"}",
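A minimal sketch, assuming stand-in accuracy values, of the perturbation summary reported in the tables above (the accuracy change on the masked target domain versus the mean/std of changes over the remaining domains). The numbers and the masked-domain index below are random placeholders, not results from the paper.

```python
import torch

torch.manual_seed(0)
acc_before = torch.rand(14) * 0.2 + 0.3           # stand-in per-domain accuracies
acc_after  = acc_before + 0.01 * torch.randn(14)  # small collateral changes elsewhere
target = 2                                        # index of the masked domain (hypothetical)
acc_after[target] -= 0.05                         # larger drop on the masked domain

delta = acc_after - acc_before
others = torch.cat([delta[:target], delta[target + 1:]])
print(f"delta target: {delta[target].item():.3f}, "
      f"mean others: {others.mean().item():.3f}, "
      f"std others: {others.std().item():.3f}")
```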
"{\"title\": \"Response to Reviewer YJRi (Part 1/3)\", \"comment\": \"We appreciate the comments about the method and are sorry about the confusion.\\n\\n> **Q1**. Presentation can be greatly improved. \\\\\\n**Q2**. Which part of the changes over PEER make it superior? \\\\\\n**Q3**. Incremental proposal on top of PEER, I am uncertain how significant the contributions are.\\n> \\n\\n**Please refer to our updated manuscript**, where we have improved the readability in sections `2. Preliminaries` to through `3. Monet`. Based on your feedback **Q1**, we have also enhanced the presentations of `Figure 1` in the revision.\", \"to_summarize_sections_2_and_3\": \"1. Inspired by the product key algorithm [1], PEER [2] processes up to a million experts with product key retrieval. \\n2. Despite its computational efficiency, PEER requires to initialize and store $N$ standalone experts, resulting in memory usage that grows linearly with the number of experts, $O(N)$.\\n3. In response to **Q2**, our contribution is partitioning the expert\\u2019s MLP network into two different groups of segments and storing them within $O(\\\\sqrt{N})$ memory constraint. During the training or inference, the learned router dynamically composes expert networks to form $N$ combinations of experts.\", \"below_is_a_comparison_of_time_complexity_for_expert_retrieval_and_space_complexity_for_expert_parameters\": \"| **Model** | **Time Complexity** | **Space Complexity** |\\n| --- | --- | --- |\\n| SMoE | $O(Nd)$ | $O(Nmd)$ |\\n| PEER | $O((\\\\sqrt{N} + k^2)Hd)$ | $O(Nd)$ |\\n| Monet | $O(\\\\sqrt{N}Hd)$ | **$O(\\\\sqrt{N}md)$** |\\n\\nwhere $d$ is the hidden dimension of the expert, $m$ is the dimension of the individual expert, $k$ is the TopK hyperparameter, and $H$ denotes multi-head of the router. \\n\\nRegarding **Q3**, we suggest that our contribution is significant considering that our product key composition has optimized space complexity while maintaining the time complexity of PEER.\"}",
"{\"title\": \"Response to Reviewer YJRi (Part 3/3)\", \"comment\": \"> Can we mix Horizontal Expert Decomposition and vertical expert decomposition?\\n> \\n\\nThank you for your suggestions on additional experiments where two orthogonal decomposition methods can be mixed and complement each other. The results are presented as below:\\n\\n**Summary of 8 open-ended LLM benchmarks**\\n| | Avg. Performance (0-shot) | Avg. Performance (5-shot) |\\n| :---: | :---: | :---: |\\n| Horizontal Decomposition (HD) | 0.463 | 0.487 |\\n| Vertical Decomposition (VD) | 0.478 | 0.510 |\\n| Complementary Mix (HD + VD) | 0.470 | 0.503 |\\n\\n\\n**Details of 8 open-ended LLM benchmarks**\\n| | MMLU | ARC | WG | PIQA | SIQA | OBQA | HellaSwag | CSQA | Avg |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| 0-shot | | | | | | | | | |\\n| Horizontal Decomposition (HD) | 0.338 | 0.471 | 0.538 | 0.714 | 0.418 | 0.382 | 0.501 | 0.339 | 0.463 |\\n| Vertical Decomposition (VD) | 0.352 | 0.495 | 0.522 | 0.727 | 0.423 | 0.418 | 0.529 | 0.363 | 0.478 |\\n| Complementary Mix (HD + VD) | 0.338 | 0.504 | 0.541 | 0.726 | 0.403 | 0.382 | 0.521 | 0.349 | 0.470 |\\n| 5-shot | | | | | | | | | |\\n| Horizontal Decomposition (HD) | 0.352 | 0.544 | 0.530 | 0.720 | 0.432 | 0.360 | 0.518 | 0.441 | 0.487 |\\n| Vertical Decomposition (VD) | 0.360 | 0.547 | 0.526 | 0.730 | 0.441 | 0.422 | 0.551 | 0.501 | 0.510 |\\n| Complementary Mix (HD + VD) | 0.355 | 0.567 | 0.541 | 0.717 | 0.437 | 0.384 | 0.537 | 0.489 | 0.503 |\\n\\n> **Q1**. citation to PEER is missing.\\\\\\n**Q2**. No proper ablations to study different choices in the architectural design and no insight is provided. \\\\\\n**Q3**. What does the model start with in table 3?\\n> \\n\\nRespectfully, we would like to correct that\\n\\n- **A1**. A citation to PEER was already present in our `1. Introduction` section.\\n- **A2**. An ablation study on auxiliary loss weights was present in `Appendix Section C.1`, where orthogonal architectural design choices have been rigorously compared in `Section 3` across model sizes and benchmarks.\\n- **A3**. `Table 3`\\u2019s full performance was also present in `Appendix Section E`.\\n\\nWe understand that such a misconception is due to a lack of space in the paper where a fraction of the information had to be moved to the appendix. We would graciously ask you to read our revised manuscript if you could spare your invaluable time. Thank you once again.\\n\\n[1] Guillaume Lample, Alexandre Sablayrolles, Marc\\u2019Aurelio Ranzato, Ludovic Denoyer, and Herv\\u00e9 J\\u00e9gou. Large Memory Layers with Product Keys. In Advances in Neural Information Processing Systems, volume 32, 2019.\\\\\\n[2] Xu Owen He. Mixture of a million experts. arXiv preprint arXiv:2407.04153, 2024\\\\\\n[3] Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, et al. OLMoE: Open Mixture-of-Experts Language Models. arXiv preprint arXiv:2409.02060, 2024.\\\\\\n[4] Yikang Shen, Zhen Guo, Tianle Cai, and Zengyi Qin. JetMoE: Reaching Llama2 Performance with 0.1M Dollars. arXiv preprint arXiv:2404.07413, 2024.\\\\\\n[5] Damai Dai, Chengqi Deng, Chenggang Zhao, R.x. Xu, Huazuo Gao, et al. DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 
1280\\u20131297, August 2024.\\\\\\n[6] Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski, Maciej Pi\\u00f3ro, Micha\\u0142 Krutul, et al. Scaling Laws for Fine-Grained Mixture of Experts. In ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2024.\"}",
"{\"title\": \"Response to Reviewer 53hd (Part 1/2)\", \"comment\": \"We would like to express our gratitude for your positive feedback on our paper's idea and the effort you invested in its assessment. In the following response, we will address each of the weaknesses and questions you have raised.\\n\\n> Section 3 should be more clear.\\\\\\n**Q1**. Why is there any memory savings (compared to the PEER approach)?\\\\\\n**Q2**. Why is each expert of dimension m (while in PEER, it is a single neuron)?\\\\\\n**Q3**. I also recommend the authors to do a complexity calculation, Section 3.2 to be fully transparent on the memory/computation complexities.\\\\\\n**Q4**. I think that this drawing should be improved.\\n> \\n\\n**Please refer to our updated manuscript**, where we have improved the readability in sections `2. Preliminaries` to through `3. Monet`. We appreciate your comments about the clarity, and we are sorry about the confusion.\", \"to_summarize_sections_2_and_3\": \"1. Inspired by the product key algorithm [1], PEER [2] processes up to a million experts with product key retrieval.\\n2. Despite its computational efficiency, PEER requires to initialize and store $N$ standalone experts, resulting in memory usage that grows linearly with the number of experts, $O(N)$.\\n3. In response to **Q1**, our contribution is partitioning the expert\\u2019s MLP network into two different groups of segments and storing them within $O(\\\\sqrt{N})$ memory constraint. During the training or inference, the learned router dynamically composes expert networks to form $N$ combinations of experts.\", \"below_is_a_comparison_of_time_complexity_for_expert_retrieval_and_space_complexity_for_expert_parameters\": \"| **Model** | **Time Complexity** | **Space Complexity** |\\n| --- | --- | --- |\\n| SMoE | $O(Nd)$ | $O(Nmd)$ |\\n| PEER | $O((\\\\sqrt{N} + k^2)Hd)$ | $O(Nd)$ |\\n| Monet | $O(\\\\sqrt{N}Hd)$ | **$O(\\\\sqrt{N}md)$** |\\n\\nwhere $d$ is the hidden dimension of the expert, $m$ is the dimension of the individual expert, $k$ is the TopK hyperparameter, and $H$ denotes multi-head of the router. \\n\\n- Regarding **Q2**, dimension $m$ can be any value in our architecture, but PEER had to use a fixed value of $m=1$ because of a memory bottleneck.\\n- Regarding **Q3**, specific complexity calculation is present in `Appendix A.2` in our updated manuscript, where the table above provides a brief overview and comparison.\\n- Based on your feedback **Q4**, we have also enhanced the presentations of `Figure 1` in the revision.\\n\\n[1] Guillaume Lample, Alexandre Sablayrolles, Marc\\u2019Aurelio Ranzato, Ludovic Denoyer, and Herv\\u00e9 J\\u00e9gou. Large Memory Layers with Product Keys. In Advances in Neural Information Processing Systems, volume 32, 2019.\\\\\\n[2] Xu Owen He. Mixture of a million experts. arXiv preprint arXiv:2407.04153, 2024.\"}",
"{\"title\": \"Response to Reviewer oJuH (Part 2/2)\", \"comment\": \"> How exactly were the top experts by subdomain chosen for the Gemma-2B SAEs? Note that SAEs have no notion of probability over the \\\"experts\\\", unlike the MONET model, and I could not find this addressed in the paper. Do you pass the hidden SAE activations through a softmax first?\\n> \\n\\nWe referred to the steering methods with SAEs, such as clamping the feature activations [7, 8] based on their logit values. To adhere to the conventional logit-based steering, we analyzed the skewness of SAE\\u2019s logit values, where we determine the feature is specialized in the particular domain only when its highest logit value is at least twice higher than that of the second most activated domain.\\n\\n> The only relevant baseline here is using SAEs at the MLP layers, because this matches the MONET setup; so, the residual stream SAEs seem irrelevant for this work?\\n> \\n\\nWhile SAEs at the MLP layers correspond to the Monet's fine-grained experts, we chose to include residual stream SAE results for comprehensiveness. The MLP-based comparisons demonstrate the core architectural benefits, while the residual stream results provide context within the broader landscape of interpretability research. This allows readers to evaluate Monet's effectiveness against both the most directly comparable baseline and current common practices in the field.\\n\\n> What is the scale in figure 2?\\n> \\n\\nRegarding the scale and full performance of each Monet (ours), Gemma Scope, OLMoE, and LLaMa in MMLU domain unlearning, we have listed in `Appendix E`\\u2019s `Table 11` through `Table 14` for the specifics. Please refer to the revised manuscript, and if you have additional inquiries, we are happy to respond to further questions and comments.\\n\\n> \\u2022 For example, the only interpretability method used as a baseline is patching reconstructions from SAEs for Gemma-2B. However, it is not reported what sparsity these SAEs achieve compared to the (effective?) sparsity of MONET. This makes it difficult to make sense of the results. \\\\\\n\\u2022 The primary goal of SAEs is to find interesting concepts used by the model, and reconstruction is secondary to that (and being able to chain SAE reconstructions is even more secondary). So, ideally the baseline would compare the \\\"monosemanticity\\\" of MONET features vs SAE ones.\\n> \\n\\nWe employed Gemma Scope with 262K features at $L_0 = 263$, its maximum provided sparsity setting. However, direct sparsity comparisons between Monet and SAE models are not methodologically sound due to fundamental architectural differences. While MoE models use top-k routing for sparse expert activation, this mechanism differs fundamentally from SAE's $L_0$ sparsity measure.\\n\\nNevertheless, our Monet's theoretical sparsity would be $L_0$ is 512, derived from $|\\\\mathcal{K}_h^1 \\\\times \\\\mathcal{K}_h^2| = 64$ across 8 multi-head routings. Despite this higher $L_0$ value, which traditionally would suggest lower monosemanticity, Monet achieves superior disjoint unlearning performance, as demonstrated in Figure 3 in our revised manuscript. This indicates that routing-based sparsity may be more effective at isolating and controlling specific knowledge domains compared to traditional SAE approaches.\\n\\nWe thank you again for your constructive comments and for your efforts to improve the quality of our paper. Please let us know if you have any further questions or if we can provide further clarification. 
\\n\\n[1] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. In Advances in Neural Information Processing Systems, volume 35, pp. 17359\\u201317372, 2022.\\\\\\n[2] Dmitrii Kharlapenko, neverix, Neel Nanda, and Arthur Conmy. Self-explaining SAE features. AI Alignment Forum, 2024. URL https://www.alignmentforum.org/posts/8ev6coxChSWcxCDy8\\\\\\n[3] Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, and Mor Geva. Patchscope: A Unifying Framework For Inspecting Hidden Representations of Language Models. arXiv preprint arXiv:2401.06102, 2024.\\\\\\n[4] Haozhe Chen, Carl Vondrick, and Chengzhi Mao. SelfIE: Self-Interpretation of Large Language Model Embeddings. arXiv preprint arXiv:2403.10949, 2024.\\\\\\n[5] John Hewitt, John Thickstun, Christopher D. Manning, and Percy Liang. Backpack Language Models. In Annual Meeting of the Association for Computational Linguistics, 2023.\\\\\\n[6] Alex Tamkin, Mohammad Taufeeque, and Noah D Goodman. Codebook Features: Sparse and Discrete Interpretability for Neural Networks. arXiv preprint arXiv:2310.17230, 2023.\\\\\\n[7] Leo Gao, Tom Dupr\\u00e9 la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. Scaling and evaluating sparse autoencoders. arXiv preprint arXiv:2406.04093, 2024.\\\\\\n[8] Adly Templeton*, Tom Conerly*, Jonathan Marcus, Jack Lindsey, Trenton Bricken, et al. Extracting Interpretable Features from Claude 3 Sonnet. Transformer Circuits Thread, 2024. URL https://transformer-circuits.pub/2024/scaling-monosemanticity\"}",
"{\"comment\": \"Dear Reviewer sHPn,\\n\\nThank you for your thoughtful feedback and for recommending the acceptance of our paper due to its strong novelty. We are glad that our responses have addressed your main concerns.\\n\\nAs the author-reviewer discussion period is nearing its end, we wanted to inquire if there are any remaining questions or suggestions you might have. If our revisions have satisfactorily addressed your concerns, we kindly ask you to consider reflecting this in your final evaluation.\\n\\nWe sincerely appreciate your time and contributions to improving our work. Please feel free to share any additional feedback, and we will be more than happy to discuss and incorporate it.\\n\\nThank you once again for your support.\\n\\nBest regards,\\n\\nThe Authors.\"}",
"{\"comment\": \"Dear Reviewer sHPn,\\n\\nThank you for your support and for recommending the acceptance of our paper due to its strong novelty. We are grateful for your positive feedback.\\n\\nWe notice that your overall rating remains unchanged. If there are any remaining concerns or suggestions you have for improving our submission, we would greatly appreciate your guidance. Your insights are valuable to us, and we are committed to addressing any outstanding issues.\"}",
"{\"title\": \"Response to Reviewer oJuH (Part 1/2)\", \"comment\": \"We sincerely thank the reviewer for your helpful and constructive suggestions. In the following response, we will explicate the changes that have been made to the manuscript and a new version uploaded.\\n\\n> Have you tried running the MONET features through an automated interpretability pipeline?\\n\\nWe express gratitude for suggesting such valuable feedback, where we have reflected the changes in `Figure 2` and in the `4.3 Qualitative Results` section in our revised manuscript.\\n\\nThe attached example, [sae-auto-interp](https://github.com/EleutherAI/sae-auto-interp) has its significance in generating explanations of SAE features via external LLM or through the compatible API. We agree that features within the model should be able to be described in natural languages, considering its significance in controlling and managing LLMs.\\n\\nTaking your advice, we decided to take a step further, referring to the Self-explaining SAE features [2]. It states that it has advantages over [sae-auto-interp](https://github.com/EleutherAI/sae-auto-interp) of the following: no max activating dataset examples are needed, and it\\u2019s cheaper by using the model subject of study to generate about its own features rather than relying on a larger model like GPT-4. \\n\\n**Without using external LLMs or APIs, we adapted an automated interpretation framework as `Self-explained Experts`**, where Monet-1.4B CHAT generates a description for its own experts. We have referred to the work of Patchscope [3] and SelfIE [4], where both works prompt LLM to answer \\u201cQ: What is the meaning of the word X? A: Sure! The meaning of the word X is \\u201d, where X serves as a placeholder for target token embedding for the analyses. Similarly, we averaged token embeddings activated for the targeted expert and have inserted them into the aforementioned placeholder. Our Monet-1.4B CHAT generated a description for its experts, like explaining the Expert 232,717 as \\u201cCartilage\\u201d and the Expert 51 as \\u201cExpertise\\u201d, as stated in our revised manuscript. \\n\\n> **Q1**. The paper would benefit from a discussion of, and comparison with, related work, such as backpack language models and codebook features. \\\\\\n**Q2**. Perhaps adding extra bells and whistles like instruction tuning or multimodality distracts from the main goal of the paper, which is to establish the usefulness of the new architecture for interpretability.\\n\\nWe thank your opinion on the paper\\u2019s related works and its main goal. \\n\\n- We have reviewed Backpack LLMs [5] and Codebook Features [6] according to your advice **Q1**, where we found encoding interpretable weights in LLM during pretraining shares similar philosophy in achieving interpretable models. In our `1. Introduction` section, we have reflected the change accordingly.\\n- Furthermore, we value your advice **Q2** and took out the examples of multimodal experts (`Figures 9 and 10`) from the main text and moved to the appendix section. The rationale for staying in the paper is that it is yet unknown whether it is generalizable for fine-grained experts to specialize in and capture monosemantic concepts across modalities with finetuning. 
We would appreciate it if you could reconsider the significance of analyzing the expandability of our method in LLM\\u2019s multimodal integration to remain in the paper\\u2019s appendix section.\\n- In the case of instruction tuning, the process was a precursor of the automated interpretability pipeline. Adhering to your suggestion, we have excluded the specifics regarding instruction tuning and moved to the Appendix, but we discussed its role in `Self-explained Experts` as we mentioned above in the previous response.\\n\\n> A baseline using the ordinary MLP neurons of the LLaMA model would be very valuable to make the point that MONET discovers more interpretable structure compared to the neuron basis.\\n\\nThank you for your insightful suggestion. In response, we have included the LLaMA unlearning baseline in `Figure 3` and in `5.1 Domain Masking` section of our revised manuscript. \\n\\nIn our experiments, we suppressed domain-specific MLP neurons based on first-layer activations. Inspired by the ROME [1] treating MLP as key-value pairs, we identified neurons with domain specialization based on GELU activations. Specifically, if the highest activation of a particular domain is twice as much as that of the second highest activated domain, we consider that neuron a specialized neuron.\\n\\nFor the results, LLaMA displays an average 6% of neurons to be specialized in each domain compared to Monet's 2.2%, suggesting possible feature entanglement and resulting in significant performance degradation across unrelated domains during knowledge removal. We measured delta performances\\u2019 mean standard deviation of the other 13 domains, resulting in 0.7 for Monet and 1.4 for LLaMa, differing twice as much in stability of knowledge conservation during unlearning. Such results highlight Monet\\u2019s monosemanticity, where experts encapsulate disentangled parametric knowledge across domains.\"}",
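A rough sketch of the `Self-explained Experts` recipe described above, under the assumption that one averages the embeddings of an expert's top-activating tokens and splices the result into the "X" placeholder slot of the explanation prompt. The embedding table, token ids, and placeholder id below are hypothetical stand-ins, not values from the released model.

```python
import torch

vocab, d = 1000, 64
emb_table = torch.randn(vocab, d)              # stand-in for the LM's embedding matrix

activating_ids = torch.tensor([17, 254, 731])  # tokens that most activate the target expert
expert_vec = emb_table[activating_ids].mean(dim=0)

PLACEHOLDER = 0                                # hypothetical id reserved for the "X" slot
prompt_ids = torch.tensor([5, 9, 3, PLACEHOLDER, 7, 2])   # "What is the meaning of X?"
prompt_embs = emb_table[prompt_ids].clone()
prompt_embs[prompt_ids == PLACEHOLDER] = expert_vec       # patch the placeholder slot

# prompt_embs would then be fed to the model via `inputs_embeds` so that it
# generates a natural-language description of its own expert.
print(prompt_embs.unsqueeze(0).shape)          # torch.Size([1, 6, 64])
```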
"{\"title\": \"General Response to All Reviewers\", \"comment\": \"We sincerely appreciate the reviewers for their thoughtful and constructive feedback, which have greatly contributed to improving our work. We are pleased that the reviewers find our problem statement **important and interesting** (@oJuH), and believe our work may be **influential in the field of interpretable neural networks** (@53hd). Reviewers also consider our proposed architecture **novel and effective** (@oJuH, @53hd, @sHPn), and regard our **experiments as convincing and comprehensive** (@53hd, @sHPn).\\n\\nIn our responses to the reviews, we have carefully addressed all raised concerns. These can be summarized as follows:\\n\\n- **Improved presentation and clarity**: We have enhanced the methods section and Figure 1 to facilitate a clearer understanding of our proposed product key composition. (@53hd, @sHPn, @YJRi)\\n- **Automated interpretation framework**: We have adapted an automated interpretation framework as Self-explained Experts without relying on external LLMs or APIs. This approach is discussed in Section 4.3, with results illustrated in Figure 2. (@oJuH)\\n- **Additional interpretability baselines of OLMoE and LLaMA**: We have incorporated additional interpretability baselines in Section 5.1 (Domain Masking) and illustrated them in Figure 3, where such baselines exhibited polysemanticity in unlearning. (@oJuH, @53hd, @YJRi)\\n- **Additional general performance comparisons**: We conducted additional experiments comparing Monet with the state-of-the-art SMoE architecture OLMoE under matched conditions, demonstrating Monet's superior performance across benchmarks. (@YJRi)\\n- **Complexity calculations**: We have included complexity calculations in Appendix A.2, demonstrating that our method efficiently reduces memory growth to $O(\\\\sqrt{N}md)$, enabling us to scale the expert count to 262,144. (@53hd, @sHPn) \\n \\n | **Model** | **Time Complexity** | **Space Complexity** |\\n | --- | --- | --- |\\n | SMoE | $O(Nd)$ | $O(Nmd)$ |\\n | PEER | $O((\\\\sqrt{N} + k^2)Hd)$ | $O(Nd)$ |\\n | Monet (Ours) | $O(\\\\sqrt{N}Hd)$ | **$O(\\\\sqrt{N}md)$** |\\n\\nWe have incorporated the feedback into our revised paper, highlighting the changes in blue for easy reference. Additional edits have been made to enhance clarity and conciseness. We welcome further questions or comments and will promptly address any concerns.\\n\\nThank you again,\\n\\nThe Authors.\"}",
"{\"title\": \"Response to Reviewer YJRi (Part 2/3)\", \"comment\": \"> No baseline comparison against PEER and traditional SMoE.\\n\\nFollowing your request for a traditional SMoE interpretability baseline, we have included knowledge unlearning of OLMoE [3]. OLMoE LLM with total 6.9B parameters has been selected as the representative baseline for conventional SMoE architectures for two reasons: (1) it has the largest number of experts among the publicly available SMoE LLMs [3-5] and (2) it has been trained with an extensive amount of tokens from various sources.\\n\\n**Monet-VD 1.4B\\u2019s Domain Masking Performance Perturbation in MMLU**\\n\\n||biology|business|chemistry|compsci|economics|engineering|health|history|law|math|other|philosophy|physics|psychology|\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|$\\\\Delta$ Target Domain|-4.66|-4.61|-5.49|-1.05|-2.32|-4.14|-3.21|-2.14|-0.81|-3.1|-0.37|-1.5|-1.2|-2.59|\\n|$\\\\Delta$ Avg. Other Domains|-0.42|-0.05|-0.28|-0.51|-0.08|-0.06|0.04|-0.21|-0.2|0.03|-0.02|-0.24|-0.28|-0.21|\\n|$\\\\Delta$ Std. Other Domains|0.52|0.9|0.93|0.74|0.69|0.66|0.67|0.57|0.66|0.79|0.7|0.71|0.81|0.61|\\n\\n- Mean of $\\\\Delta$ Target: -2.65\\n- Mean of $\\\\Delta$ Avg. Other: -0.18\\n- Mean of $\\\\Delta$ Std. Other: 0.71\\n\\n**OLMoE 6.9B\\u2019s Domain Masking Performance Perturbation in MMLU**\\n\\n||biology|business|chemistry|compsci|economics|engineering|health|history|law|math|other|philosophy|physics|psychology|\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|$\\\\Delta$ Target Domain|-1.74|-5.89|-4.46|-9.47|-3.68|-6.9|-4.55|-8.62|-7.98|-6.56|-0.62|-4.74|-2.72|-0.86|\\n|$\\\\Delta$ Avg. Other Domains|-1.33|-2.86|-3.08|-0.4|-1.51|-4.29|-1.67|-3.8|-5|-3.22|-0.27|-1.91|-0.96|-0.66|\\n|$\\\\Delta$ Std. Other Domains|1.3|1.78|2.04|1.18|1.62|2.38|2.08|2.15|2.22|2.51|1.11|1.49|1.55|0.68|\\n\\n- Mean of $\\\\Delta$ Target: -4.91\\n- Mean of $\\\\Delta$ Avg. Other: -2.21\\n- Mean of $\\\\Delta$ Std. Other: 1.72\\n\\nOur additional experimentations suggest that OLMoE may be constituted with polysemantic experts. Results can be summarized as the following: \\n\\n1. In OLMoE, there were extremely few specialized experts for MMLU, based on our criteria of skewness in expert routing score. In the case of Monet, we identified specialized experts if its highest routing score on a particular domain is twice as much as that of the second highest domain. However, OLMoE\\u2019s experts\\u2019 routing score was evenly distributed, making it difficult to detect specialized experts. We leveraged criteria of occurrences in maximum activation to determine the expert\\u2019s domain specialization to obtain the results.\\n2. OLMoE\\u2019s accuracy drop in other domains was significant in unlearning, possibly due to the entangled characteristics of experts since their specializations were only detectable with argmax criteria.\\n3. We measured delta performances\\u2019 mean standard deviation of the other 13 domains, resulting in 0.7 for Monet and 1.7 for OLMoE, differing twice as much, showing disparity in stability of knowledge conservation during unlearning.\\n\\nWe believe that such results suggest that for most of the SMoE architectures [3-6] with 64 experts or less, the expert count is too small to disentangle polysemanticity. Our architecture, on the other hand, has 262,144 experts available, which we believe enable fine-grained specialization, resulting in monosemantic experts that capture mutually exclusive aspects of knowledge. 
To further address your inquiry, we provide an overview of the unlearning results of Monet, Gemma Scope, OLMoE, and LLaMa in `Figure 3` in our revised paper.\\n\\nDespite the fact we have previously compared the time complexity and space complexity with the PEER baseline, we remind you that additional 100B parameters are needed to constitute a PEER baseline, as we have explained in `Section 2` of our paper. Such exorbitant memory requirements are beyond the scope of most researchers (note that PEER was introduced by Google Deepmind), where our contribution is to achieve parameter efficiency, **precisely because directly implementing the PEER baseline is infeasible**.\"}",
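A small sketch of the specialization criterion used above (an expert counts as domain-specialized when its highest routing score is at least twice the runner-up) and of collecting the experts to mask for one domain. The routing-score matrix is random stand-in data, and zeroing the selected experts' routing probabilities is our assumed masking mechanism rather than a quote of the paper's code.

```python
import torch

torch.manual_seed(0)
num_experts, num_domains = 262_144, 14
routing_mass = torch.rand(num_experts, num_domains)   # stand-in per-domain routing scores

top2 = routing_mass.topk(2, dim=-1)
specialized = top2.values[:, 0] >= 2.0 * top2.values[:, 1]   # skewness criterion
domain_of   = top2.indices[:, 0]

target_domain = 3                                     # e.g., one MMLU domain (hypothetical index)
mask_ids = torch.nonzero(specialized & (domain_of == target_domain)).flatten()
print(f"masking {mask_ids.numel()} of {num_experts} experts")
# During unlearning, the routing probabilities of `mask_ids` would be zeroed out.
```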
"{\"comment\": \"Thank you for the timely response and the quick revision of your work. These additions greatly improve the presentation of your work.\\n\\nRegarding the autointerpretability, I still think results that compare, using a single autointerpretability pipeline, the interpretability score of MONET experts versus SAE latents, but this seems like a better fit for future work, as it would also require developing a way to fairly compare your architecture to SAEs.\"}",
"{\"title\": \"Thank you for your valuable feedback\", \"comment\": \"Thank you for your support and for endorsing the acceptance of our paper. In the following responses, we address each of the concerns you have raised and have updated the manuscript accordingly (*changes highlighted in magenta).\\n\\n> the autointerpretability evaluation is exploratory and does not demonstrate metrics showing the improved interpretability of MONET compared to other methods;\\n> \\n\\nThank you for raising this important point about quantitative evaluation of automated interpretability. **We want to clarify that primary objective of self-explained experts was to leverage LLMs' internal knowledge and capabilities by eliminating dependencies on external LLMs**, rather than to demonstrate superiority in automated interpretation frameworks. We note that concurrent work in quantitatively measuring automated interpretability of SAEs [1,2] are still in its early stage of development, which we view as an opportunity to develop more comprehensive evaluation protocols.\\n\\nWhile tools like `sae-auto-interp` provide valuable pipelines for generating and evaluating feature explanations, their quantitative evaluation frameworks are currently designed to compare different explanation methods between each LLM, rather than enabling direct comparisons between SAE models. We plan to prioritize developing more robust comparative frameworks in our future work to provide additional numerical assessment of Monet's automated interpretation framework.\\n\\n> the results described in Figure 3 are very interesting, but they are a bit hard to read due to the unclear scale. In particular, looking at Appendix E, I found the last two rows of each table (\\u0394 Target and\\u00a0\\u0394\\u00a0Others) to be most helpful in making sense of this dense data. I would advise somehow surfacing this in the camera ready if the paper is accepted;\\n> \\n\\nThank you for your positive feedback on Figure 3. We appreciate your feedback that, while the results are very interesting, the unclear scale made them hard to read. We agree that relying on Appendix E for clarity might be inconvenient for readers. In response to your suggestion, **we have updated Figure 3 in the revised manuscript to include precise scales.** We believe these editorial changes will enhance the readability of our results. Thank you for bringing this to our attention, and we apologize for any inconvenience the original presentation may have caused.\\n\\n> I feel that the exploration of methods for picking experts was insufficient. I would love to see future work/revisions more thoroughly tuning the choice of experts for each baseline as well as MONET.\\n> \\n\\nThank you for your valuable feedback. We acknowledge that our exploration of methods for selecting experts was insufficient and agree that more thorough tuning is necessary.\\n\\nIn our current work, we used the skewness of the routing score to determine experts' domain specialization and identified toxic experts using the Pearson correlation coefficient between the toxicity score and the routing score. We recognize that these criteria are basic and minimal. \\n\\nOur primary contribution lies in making the LLM transparent, enabling researchers to observe routing scores and directly manipulate the parametric knowledge. 
We believe that **the routing scores of monosemantic experts allow researchers to observe patterns for retrieving intrinsic knowledge, which were previously opaque in polysemantic LLMs.** We are optimistic that such observations can lead to addressing research questions related to hallucinations (e.g., \\\"Is the model confident in retrieving internal knowledge?\\\") and lifelong learning in LLMs (e.g., \\\"How can we incorporate additional knowledge into the model?\\\"). \\n\\n**Based on your feedback, we have added a \\\"Limitations\\\" section to our paper**, summarizing the discussions above. Thank you once again for your insightful comments, which have been invaluable in guiding the future direction of our research.\\n\\n[1] Jack Lindsey, Hoagy Cunningham, and Tom Conerly. Interpretability Evals for Dictionary Learning. Transformer Circuits Thread, 2024, URL\\u00a0https://transformer-circuits.pub/2024/august-update/index.html#interp-evals\\n\\n[2] CHAUDHARY, Maheep; GEIGER, Atticus. Evaluating Open-Source Sparse Autoencoders on Disentangling Factual Knowledge in GPT-2 Small.\\u00a0arXiv preprint arXiv:2409.04478, 2024.\"}",
"{\"title\": \"Official Comment by Reviewer sHPn\", \"comment\": \"I appreciate the authors for the rebuttal. I recommend the acceptance of the paper due to the strong novelty of the work.\"}",
"{\"summary\": \"This paper presents a new architecture that makes large language models more interpretable with monosemanticity. The authors develop novel decomposition methods to efficiently scale to 262K experts per layer, achieving specialists that focus on single concepts through end-to-end training. The model also enables control over model knowledge (across domains, languages, and toxicity) without degrading performance, outperforming traditional Sparse Autoencoder approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper presents novel decomposition methods that scales traditional MoE to 262k experts.\", \"The paper delivers comprehensive experimental results on the proposed model architecture.\", \"The proposed method achieves good expert specialization, proven under several experimental settings\"], \"weaknesses\": [\"The intuition behind the architecture design is unclear.\", \"The explanation in the methodology section is poor and hard to understand.\"], \"questions\": [\"What's the reason of choosing $512^2$ number of experts?\", \"Are there any trade-offs for adopting Monet over traditional MoE? What is the training time comparison between Monet and LLaMA baseline models?\", \"I suggest the authors to elaborate more on the methodology section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethcis concerns are needed for the paper.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Future works on autointerpretability\", \"comment\": \"Thank you for your encouraging feedback and for acknowledging the improvements in our revised manuscript. We are pleased to hear that the additions have enhanced the presentation of our work.\\n\\n**We concur that this endeavor is well-suited for future work, where we can dedicate effort to develop a robust and fair comparative methodology.** This would not only strengthen the evaluation of our model but also contribute to the broader research community by providing tools to assess interpretability across different architectures.\\n\\nThank you once again for your insightful suggestions. We are committed to advancing this line of research and look forward to exploring these ideas in our future work.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to Reviewer sHPn\", \"comment\": \"We would like to express our gratitude to the reviewer for their constructive response. Below we respond to the weaknesses and questions.\\n\\n> **Q1**. The explanation in the methodology section is poor and hard to understand.\\\\\\n**Q2**. Are there any trade-offs for adopting Monet over traditional MoE?\\\\\\n**Q3**. The intuition behind the architecture design is unclear.\\\\\\n**Q4**. What's the reason of choosing $512^2$ experts?\\n> \\n\\n**Please refer to our updated manuscript**, where we have improved the readability in sections `2. Preliminaries` to through `3. Monet`. We appreciate your comments about the clarity, and we are sorry about the confusion.\", \"to_summarize_sections_2_and_3\": \"1. Inspired by the product key algorithm [1], PEER [2] processes up to a million experts with product key retrieval. \\n2. Despite its computational efficiency, PEER requires to initialize and store $N$ standalone experts, resulting in memory usage that grows linearly with the number of experts, $O(N)$.\\n3. In response to **Q1**, our contribution is partitioning the expert\\u2019s MLP network into two different groups of segments and storing them within $O(\\\\sqrt{N})$ memory constraint. During the training or inference, the learned router dynamically composes expert networks to form $N$ combinations of experts.\", \"below_is_a_comparison_of_time_complexity_for_expert_retrieval_and_space_complexity_for_expert_parameters_to_address_q2\": \"| **Model** | **Time Complexity** | **Space Complexity** |\\n| --- | --- | --- |\\n| SMoE | $O(Nd)$ | $O(Nmd)$ |\\n| PEER | $O((\\\\sqrt{N} + k^2)Hd)$ | $O(Nd)$ |\\n| Monet | $O(\\\\sqrt{N}Hd)$ | **$O(\\\\sqrt{N}md)$** |\\n\\nwhere $d$ is the hidden dimension of the expert, $m$ is the dimension of the individual expert, $k$ is the TopK hyperparameter, and $H$ denotes multi-head of the router. Individual expert dimension $m$ can be any value in our architecture, but PEER had to use a fixed value of $m=1$ because of a memory bottleneck.\\n\\n- Regarding **Q3**, our purpose is to optimize space complexity while maintaining the time complexity of PEER.\\n- For **Q4**, we have followed the product key counts as mentioned in [1] of $512^2$ for our product key composition.\\n\\nThank you for your thoughtful feedback that has helped refine our paper. We welcome any further questions or suggestions that could enhance the contribution of our work to the field.\\n\\n[1] Guillaume Lample, Alexandre Sablayrolles, Marc\\u2019Aurelio Ranzato, Ludovic Denoyer, and Herv\\u00e9 J\\u00e9gou. Large Memory Layers with Product Keys. In Advances in Neural Information Processing Systems, volume 32, 2019.\\\\\\n[2] Xu Owen He. Mixture of a million experts. arXiv preprint arXiv:2407.04153, 2024.\"}",
"{\"summary\": \"In this paper, the authors propose Monet. A new sMoE achitecture built on top of PEER. By pushing the notation of expert to the limit, Monet shows superior performance and unique ability to unlearn domain knowledge by simply masking out experts. Further analyses demonstrate mutual exclusivity of knowledge across experts and showcase the parametric knowledge encapsulated within individual experts.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Simple and straightforward idea\", \"The experiments on domain masking and unlearning is interesting\"], \"weaknesses\": [\"Presentation can be greatly improved. For example, figure 1 does more confusing than explaining. There is zero followup in the caption telling the readers what \\\"E1\\\", \\\"BL2\\\", \\\"TL2\\\" are. Because they are arbitrary abbreviation defined by the authors, they should be properly annotated, or simply just use the full name.\", \"No proper ablations to study different choices in the architectural design and no insight is provided. For example, can we mix Horizontal Expert Decomposition and vertical expert decomposition? Which part of the changes over PEER make it superior?\", \"No baseline comparison against PEER and traditional SMoE. How come these two most obvious baselines are missing?\"], \"some_other_minor_issues\": [\"citation to PEER is missing.\", \"Incremental proposal on top of PEER, I am uncertain how significant the contributions are\"], \"questions\": [\"What does the model start with in table 3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Baseline Comparisons\", \"comment\": \"Dear Reviewer @YJRi,\\n\\nThank you for your insightful feedback. We understand that your primary concern is how our Monet architecture compares to traditional SMoE architectures in terms of the final quality of the model. As practitioners, we agree that assessing model performance is crucial alongside interpretability.\\n\\nTo address your concern, we conducted additional experiments to provide a direct comparison between Monet and the state-of-the-art SMoE architecture, OLMoE [1]. We ensured a fair evaluation by matching both the number of active parameters and the total number of parameters, as well as training both models on the same amount of data.\\n\\n## **Total Parameter Matched Comparison**\\n\\nIn this setup, both models have a similar total parameter count and are trained on 100 billion tokens.\\n\\n### **Overall Performance**\\n\\n|Model|#Total Params|#Tokens Trained|Zero-shot Avg.|5-shot Avg.|\\n|:-:|:-:|:-:|:-:|:-:|\\n|**Monet (Ours)**|4.1B|100B|**0.511**|**0.550**|\\n|OLMoE|6.9B|100B|0.502|0.534|\\n\\n### **Benchmark Results**\\n\\n**Zero-shot Performance**\\n\\n|Task|MMLU|ARC|WinoGrande|PIQA|SocialIQA|OBQA|HellaSwag|CommonsenseQA|Avg.|\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|**Monet**|**0.380**|**0.547**|**0.557**|0.751|**0.437**|**0.424**|0.604|0.389|**0.511**|\\n|OLMoE|0.349|0.521|0.551|**0.754**|0.432|0.384|**0.620**|**0.402**|0.502|\\n\\n**5-shot Performance**\\n\\n|Task|MMLU|ARC|WinoGrande|PIQA|SocialIQA|OBQA|HellaSwag|CommonsenseQA|Avg.|\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|**Monet**|**0.398**|**0.625**|**0.564**|**0.761**|**0.470**|**0.438**|0.619|0.525|**0.550**|\\n|OLMoE|0.359|0.542|0.555|0.757|0.453|0.410|**0.637**|**0.561**|0.534|\\n\\n## **Active Parameter Matched Comparison**\\n\\nTo ensure an apples-to-apples comparison within our limited time frame, we conducted the active parameter matched experiments over a shorter training period. Both models have the same number of active parameters (1.3B) and were trained on 20 billion tokens.\\n\\n### **Overall Performance**\\n\\n|Model|#Active Params|#Tokens Trained|Zero-shot Avg.|5-shot Avg.|\\n|:-:|:-:|:-:|:-:|:-:|\\n|**Monet (Ours)**|1.3B|20B|**0.457**|**0.479**|\\n|OLMoE|1.3B|20B|0.432|0.453|\\n\\n### **Benchmark Results**\\n\\n**Zero-shot Performance**\\n\\n|Task|MMLU|ARC|WinoGrande|PIQA|SocialIQA|OBQA|HellaSwag|CommonsenseQA|Avg.|\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|**Monet**|**0.327**|**0.473**|**0.533**|**0.711**|0.418|**0.368**|**0.490**|0.338|**0.457**|\\n|OLMoE|0.298|0.405|0.513|0.697|**0.421**|0.334|0.447|**0.343**|0.432|\\n\\n**5-shot Performance**\\n\\n|Task|MMLU|ARC|WinoGrande|PIQA|SocialIQA|OBQA|HellaSwag|CommonsenseQA|Avg.|\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n|**Monet**|**0.334**|**0.531**|**0.521**|**0.703**|**0.437**|**0.356**|**0.502**|**0.449**|**0.479**|\\n|OLMoE|0.306|0.454|0.517|0.694|0.432|0.316|0.463|0.441|0.453|\\n\\n### **Discussion**\\n\\nThe results indicate that Monet consistently outperforms the traditional SMoE model across multiple benchmarks in both zero-shot and 5-shot settings. By matching both the total and active parameter counts, we ensured that the performance gains are attributable to the architectural differences rather than model size or training data volume. 
**These findings demonstrate that Monet not only offers improved interpretability but also delivers superior performance compared to conventional SMoE architectures.**\\n\\nWe have revised the manuscript accordingly to include these comparisons and address your feedback. We appreciate your suggestion, as it encouraged us to perform this comprehensive comparison. We hope this addresses your concern regarding the final quality of the model. Please let us know if you have any further questions or suggestions.\\n\\n[1] Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, et al. OLMoE: Open Mixture-of-Experts Language Models. arXiv preprint arXiv:2409.02060, 2024.\"}",
"{\"title\": \"Thank you for the effort on showing proper controlled experiments\", \"comment\": \"This is exactly what i need to see to be convinced. Best of luck!\"}",
"{\"comment\": \"Dear Reviewer oJuH,\\n\\nThank you for your thoughtful and constructive feedback on our manuscript. We have thoroughly addressed your comments and submitted a revised version for your consideration. We would greatly appreciate it if you could review our responses and the updated manuscript at your earliest convenience. We understand the demanding nature of the review process and are grateful for the time and effort you are dedicating to our work.\"}",
"{\"title\": \"Re: Comparison with baseline\", \"comment\": \"This doesn't answer my question fully: how this architecture choice fares against traditional architecture in apple to apple comparison in terms of final quality of model.\\n\\nThe perspective on interpretability is interesting and I get the point. But it's not my main concern. As practitioner, this is a very important question to be answered.\"}",
"{\"title\": \"Response to rebuttals\", \"comment\": [\"Thank you for the detailed and thorough rebuttal. I think these new additions, especially the new baselines, improve the paper. I have a few remaining important concerns:\", \"the autointerpretability evaluation is exploratory and does not demonstrate metrics showing the improved interpretability of MONET compared to other methods;\", \"the results described in Figure 3 are very interesting, but they are a bit hard to read due to the unclear scale. In particular, looking at Appendix E, I found the last two rows of each table ($\\\\Delta$ Target and $\\\\Delta$ Others) to be most helpful in making sense of this dense data. I would advise somehow surfacing this in the camera ready if the paper is accepted;\", \"I feel that the exploration of methods for picking experts was insufficient. I would love to see future work/revisions more thoroughly tuning the choice of experts for each baseline as well as MONET.\", \"Thanks again to the authors for the very detailed and thorough response. I am raising my score as a result of these improvements.\"]}",
"{\"metareview\": \"The reviewers commended the novel approach to embedding interpretability directly into large language models. By introducing sparse coding layers inspired by Mixture of Experts (MoE), the model achieves sparsity and interpretability without compromising performance. Reviewers highlighted its ability to selectively erase domain-specific knowledge, enhance safety, and enable practical applications, all while maintaining performance parity with LLaMA models on key benchmarks. The comprehensive experimental evaluations were widely praised, particularly MONET's robustness across diverse settings. The rebuttal addressed key concerns, added baselines, and clarified results.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal addressed key concerns, added baselines, and clarified results.\"}",
"{\"summary\": \"The paper proposes a new transformer architecture that replaces MLP layers in the standard decoder-only transformer architecture with a type of sparse coding layer which encourages only a small number of hidden neurons to activate on each given input. The construction is also motivated by, and borrows ideas from, the mixture of experts (MoE) literature. The primary motivation of this new architecture is to help interpretability by building something akin to a wide Sparse Autoencoder (SAE) into the MLP layers of the decoder-only transformer architecture in a scalable way, so that we can directly train for sparse (and thus hopefully interpretable) internal activations.\", \"in_more_detail\": [\"the MLP layer is viewed as an associative memory, and replaced with a sparsely activating version inspired by the paper \\\"Large memory layers with product keys\\\".\", \"The MLP layer is replaced by multiple smaller MLP subnetworks (\\\"experts\\\") that share parameters in a specific way inspired by the product idea from \\\"Large memory layers with product keys\\\" to effectively represent many experts using only a few trainable parameters.\", \"A sparse subset of the experts is chosen to produce the final output as an expectation over these layers' outputs (similar to attention)\", \"There are other engineering optimizations used to make the computation more efficient.\", \"Finally, auxiliary loss terms are added, encouraging the experts to activate uniformly on average (\\\"load balancing\\\") and each token to have a highly activating expert (ambiguity loss).\", \"This new architecture is trained on 100B tokens sampled from the FineWeb-Edu dataset (a subset of experiments also uses a programming dataset), using LLaMA trained on the same dataset as a baseline across approx. 850M, 1.4B and 4.1B parameters. The MONET architecture uses an effective count of $2^18=262,144$ experts. Comparisons on question-answering benchmarks such as MMLU show that the architecture performs mostly on par with the LLaMA baseline.\", \"As an additional baseline, SAEs for Gemma 2B are used to patch in Gemma-2B's original activations, and the performance drop due to the SAEs is measured.\", \"Some qualitative analyses of the contexts that activate a given expert subnetwork are performed.\", \"The architecture is then applied to selectively delete model knowledge in three setups: subject-specific knowledge in MMLU (e.g. delete only knowledge of chemistry but not economics etc.), programming language-specific knowledge on a code dataset (e.g. 
delete only knowledge of Python but not Java), and purging toxic experts.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper tackles an interesting and important question for the field: instead of interpreting LLMs post-hoc, can we directly train them in a way that results in interpretable weights?\", \"This adds to existing work, such as backpack LLMs https://arxiv.org/abs/2305.16765 and codebook features https://arxiv.org/abs/2310.17230\", \"The proposed architecture is interesting, can (in principle) represent a large number of experts, and performs on par with the LLaMA baseline of roughly the same parameter count.\", \"The applications to targeted erasure of knowledge are very interesting and relevant to the field.\", \"The writing is clear\"], \"weaknesses\": [\"The lack of detailed interpretability baselines makes it difficult to evaluate the strength of the results.\", \"For example, the only interpretability method used as a baseline is patching reconstructions from SAEs for Gemma-2B. However, it is not reported what sparsity these SAEs achieve compared to the (effective?) sparsity of MONET. This makes it difficult to make sense of the results.\", \"The only relevant baseline here is using SAEs at the MLP layers, because this matches the MONET setup; so, the residual stream SAEs seem irrelevant for this work?\", \"Furthermore, SAEs are trained to reconstruct activations coming from the original model being studied, and iteratively applying the SAE reconstructions to MLP layers may take downstream activations off-distribution, leading to an accumulation of errors due to SAE composition. You may argue that this is just a drawback of the SAE paradigm that MONET avoids, and the comparison is still fair. However, from my point of view, the primary goal of SAEs is to find interesting concepts used by the model, and reconstruction is secondary to that (and being able to chain SAE reconstructions is even more secondary). So, ideally the baseline would compare the \\\"monosemanticity\\\" of MONET features vs SAE ones.\", \"A baseline using the ordinary MLP neurons of the LLaMA model would be very valuable to make the point that MONET discovers more interpretable structure compared to the neuron basis\", \"The paper would benefit from a discussion of, and comparison with, related work, such as backpack language models and codebook features.\", \"Perhaps adding extra bells and whistles like instruction tuning or multimodality distracts from the main goal of the paper, which is to establish the usefulness of the new architecture for interpretability (which I believe can be achieved or falsified in a more basic setup)\"], \"questions\": [\"How exactly were the top experts by subdomain chosen for the Gemma-2B SAEs? Note that SAEs have no notion of probability over the \\\"experts\\\", unlike the MONET model, and I could not find this addressed in the paper. Do you pass the hidden SAE activations through a softmax first?\", \"What is the scale in figure 2?\", \"Have you tried running the MONET features through an automated interpretability pipeline like https://github.com/EleutherAI/sae-auto-interp?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1OGhJCGdcP | Learning subgoal representations from state graphs in goal-conditioned hierarchical reinforcement learning | [
"Shuyuan Zhang",
"Zihan Wang",
"Xiao-Wen Chang",
"Doina Precup"
] | The integration of graphs with Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) has recently gained attention, as the intermediate goals (subgoals) can be effectively sampled from graphs that naturally represent the overall task structure in most RL tasks. However, some existing approaches often rely on domain-specific knowledge to construct these graphs, limiting their applicability to new tasks.
Other graph-based approaches create graphs dynamically during exploration but struggle to fully utilize them because they have problems passing the information in the graphs to newly visited states.
Additionally, current GCHRL methods face challenges such as sample inefficiency and poor subgoal representations. In this paper, we present a solution to these issues through the development of a graph encoder-decoder that can evaluate unseen states.
Our proposed method, Graph-Guided sub-Goal representation Generation RL (G4RL), can be incorporated into any existing GCHRL method to enhance performance.
We show that the graph encoder-decoder can be effectively implemented using a network trained on the state graph generated during exploration. Empirical results indicate that leveraging high and low-level intrinsic rewards from the graph encoder-decoder significantly enhances the performance of state-of-the-art GCHRL approaches in both dense and sparse reward environments. | [
"Reinforcement Learning",
"Graph Representation Learning",
"Hierarchical Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=1OGhJCGdcP | https://openreview.net/forum?id=1OGhJCGdcP | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rL4kNiOhLy",
"pBN3M7ObzZ",
"YSqjRWChaC",
"Qe7ZSW9GYO",
"O3xo5kXpG0",
"HEIVsMPqQm",
"EkMRyXbGad",
"CarYKz9Lbs"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision",
"official_comment",
"meta_review",
"official_comment",
"official_review"
],
"note_created": [
1729938120559,
1730691023106,
1730539174159,
1737523566915,
1731965052985,
1734752327032,
1732050865763,
1730722259740
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3278/Reviewer_j8kK"
],
[
"ICLR.cc/2025/Conference/Submission3278/Reviewer_ngDJ"
],
[
"ICLR.cc/2025/Conference/Submission3278/Reviewer_1153"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3278/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3278/Area_Chair_Zv9u"
],
[
"ICLR.cc/2025/Conference/Submission3278/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3278/Reviewer_tyfX"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a novel architecture that employs a graph encoder-decoder to summarize spatial information into subgoal representations and constructs a world model based on the state graph for the agent to generate auxiliary rewards in both the high-level and low-level policies.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces the G4RL approach with a degree of originality, and the presentation is clear, effectively explaining the proposed method in a way that is easy to follow.\", \"weaknesses\": [\"The primary experiments are conducted in a limited range of environments.\", \"The ablation studies are insufficient, lacking a comprehensive analysis of key parameters such as $\\\\epsilon_{d}$, $\\\\alpha_{h}$, $\\\\alpha_{l}$, and $N$. The existing experimental results do not adequately support the significance of these parameters as stated in the methods section.\", \"There is no comparison with other representation methods to demonstrate the advantages or disadvantages.\", \"The learned world model is influenced by the current policy distribution, and it may not accurately reflect the actual world model.\"], \"questions\": [\"How is the state representation function $\\\\phi$ implemented? For example, is it based on neural networks or dimensionality reduction? Please provide specific details of the implementation.\", \"What impact do the values of $\\\\epsilon_{d}$ and parameters $\\\\alpha_{h}$, $\\\\alpha_{l}$, and $N$ have on the algorithm's performance?\", \"Can it be qualitatively or quantitatively demonstrated that the graph encoder-decoder effectively extracts spatial information?\", \"Has the representation of subgoal spatial distance been compared with other methods, such as [1]? Does it show advantages over these approaches?\", \"If the author can address the aforementioned weaknesses and questions, I will consider increasing the score.\", \"[1]Park, Seohong, Tobias Kreiman, and Sergey Levine. \\\"Foundation Policies with Hilbert Representations.\\\" Forty-first International Conference on Machine Learning.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper focuses on Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) setting and introduces a graph encoder-decoder that can evaluate unseen states and enhance performance. This encoder-decoder can be trained on data generated during exploration, and by leveraging the high and low-level intrinsic rewards from the graph encoder-decoder improves performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper focuses on an important problem of integrating graphs with Goal-conditioned Hierarchical Reinforcement Learning and improving performance.\\n2. The work provides a good motivation for the research problem and its importance.\", \"weaknesses\": \"1. The paper can benefit from improving the writing and cleaning up the list of their contributions.\\n2. The set of environments / task settings is limited and it would be beneficial to add more tasks.\\n3. In some results, the methods are pretty similar. Running more seeds or increasing the difficulty of the experiments could be useful to pull the methods apart.\", \"questions\": \"1. The settings and environments considered in the experiments are relatively simple. How does the method scale up?\\n2. How sensitive is the method to the value of K : the number of timesteps used by the high-level policy to propose a goal? Is it same across different tasks?\\n3. How many seeds were used for the experiments and how were they chosen?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a novel graph encoder-decoder approach designed to evaluate previously unseen states, which can be integrated with existing GCHRL (Graph-based Hierarchical Reinforcement Learning) methods. The proposed model is trained on state graphs generated during exploration, and the authors demonstrate its effectiveness through empirical evaluation. Results indicate improved performance in both dense and sparse reward environments, driven by multi-level intrinsic rewards derived from the graph encoder-decoder.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper is clearly written and easy to follow.\", \"The proposed method improves upon baseline methods in the AntMaze and AntGather tasks.\"], \"weaknesses\": [\"The proposed method does not function as a subgoal representation learning approach but rather predicts state affinity.\", \"The paper lacks strong positioning within the subgoal representation learning literature. It cites only one relevant work and does not provide adequate motivation or comparison with existing methods in this area.\", \"The method (G4RL) shares significant similarities with HRAC, raising several concerns: 1. G4RL constructs graphs by hard-thresholding distances in state feature space, while HRAC uses K-step affinity along trajectories. As a result, G4RL is both feature- and hyperparameter-dependent, introducing limitations. 2. HRAC applies a contrastive loss to ensure that the learned subgoal space adheres to a K-step adjacency constraint while preventing subgoals from being too close. How does G4RL regularize representation learning in the latent space? 3. What is the rationale behind combining G4RL with HRAC (i.e., HRAC-G4RL)? Does G4RL require HRAC's regularization in the latent space?\", \"The evaluation is limited in several respects: 1. The method is only tested on the AntMaze and AntGather tasks. 2. It is only compared to two pre-2020 methods, HIRO and HRAC, without including more recent subgoal representation learning methods such as LESSON, HESS, and HLPS.\", \"There is insufficient analysis of the method's sensitivity to hyperparameters, such as how \\\\epsilon depends on the environment and state space features.\"], \"questions\": \"Please address my questions in the weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you very much for the time and energy you have spent reviewing this paper.\\n\\nThe problem and suggestions mentioned are quite relevant to our paper so we organized them into the following points for better communication.\\n\\n1. Complexity issue: Introducing the graph encoder-decoder indeed increases the complexity of the architecture thus leading to increased training time. To reduce the extra cost, we propose to stop updating the graph and freeze the parameters in the graph encoder when our architecture reaches the intended performance in practical applications. \\n\\nAnother direction for simplifying the implementation is to use estimations of distances instead of distances deducted from sampled trajectories in the original state space for the training of the graph encoder-decoder. We tried to use distance estimators in unreported experiments and noticed that they could reduce the training time and still boost the original method's performance, although not as much as the version using actual distance as reported in our paper.\\n\\n2. We will consider using metrics most suitable to the underlying task. To compare with recent GCHRL methods, we intend to use the same environments and metrics in their corresponding papers. For practical applications, we will focus on the metric(s) that are most related to the task objective. \\n\\nAlso, adding the training/running time of G4RL and the underlying GCHRL method as a metric is necessary.\\n\\nThank you again for your detailed review of our paper! We are grateful for your many suggestions and hopefully, our replies could help you understand our work better.\"}",
"{\"metareview\": \"This paper addresses the limitations of integrating graphs with GCHRL by introducing a graph encoder-decoder that effectively evaluates unseen states, thereby enhancing subgoal representation and addressing sample inefficiency. The proposed method G4RL can be incorporated into existing GCHRL frameworks to improve performance, utilizing a network trained on state graphs generated during exploration.\\nDespite these contributions, according to the reviewers' feedback, this paper still has to be improved on several fronts, including addressing the increased complexity of the graph encoder-decoder without substantial performance gains, insufficient comparisons with recent GCHRL methods, limited task settings, and a lack of clarity in its contributions, as well as concerns regarding its positioning within the subgoal representation literature and the evaluation of its hyperparameter sensitivity.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not respond to all the reviewers, and the further rebuttal discussion is not responsive.\"}",
"{\"comment\": \"Thank you very much for the time and energy you have spent reviewing this paper.\", \"here_are_our_answers_to_your_concerns_point_by_point\": \"1. The settings and environments considered in the experiments are relatively simple. How does the method scale up?\\n\\nThe strategies we proposed in this paper essentially act as add-ons to existing methods, thus as soon as those baseline methods can be applied to more complex environments, one can choose to add G4RL for better performance. We understand your concerns regarding the limited scope of experiments. The complexity of these experiments is more than it seems to the eye. To understand this, we need to consider that the complexities of our environments are focused on reasoning challenges such as sparse reward and partial observability instead of learning from complex visual observations. The truth is the environments that we have employed are very difficult for traditional agents to solve.\\n\\nNevertheless, adding more tasks and environmental settings is beneficial and we are working on enriching our experiment part.\\n\\n2. How sensitive is the method to the value of K : the number of timesteps used by the high-level policy to propose a goal? Is it same across different tasks?\\n\\nIn all of our experiments, we set K to 10. Comprehensively, a large K would increase the variance of the learning signal and make the high-level agent more difficult to converge, while a small K would prolong the temporal scale of the planning process and impair HRL methods' advantage over non-hierarchical ones. We used 10 in all of our experiments because our method reaches its best performance under this setting.\\n\\n3.How many seeds were used for the experiments and how were they chosen?\\n\\nDue to limited computing resources 5 seeds were used for each experiment and the seeds were chosen randomly.\\n\\nThank you very much for your questions and suggestions!\"}",
"{\"summary\": \"This paper presents a novel approach\\u2014Graph-Guided sub-Goal representation Generation RL (G4RL)\\u2014aimed at addressing several key issues faced by existing Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) methods, including sample inefficiency and poor subgoal representations. By introducing a graph encoder-decoder architecture, G4RL effectively leverages the state graph generated during exploration to enhance the performance of existing GCHRL methods. Empirical results demonstrate performance improvements in both dense and sparse reward environments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tInnovation: The introduction of a graph encoder-decoder offers a novel perspective on GCHRL, facilitating the online construction of state graphs that yield more effective subgoal representations.\\n2.\\tGeneralizability: G4RL can be integrated into any existing GCHRL algorithm, making it versatile and applicable across various contexts.\", \"weaknesses\": \"1.\\tIncreased Complexity: Although the graph encoder-decoder adds new functionality, the added complexity does not yield a substantial performance improvement over existing HRAC methods. This raises concerns about implementation and debugging challenges without corresponding benefits.\\n2.\\tInsufficient Comparisons: The paper lacks comparisons with several recent GCHRL methods, which limits the assessment of the proposed approach's advancements and advantages over established techniques.\", \"questions\": \"1. Complexity Management: How do the authors plan to manage the increased complexity introduced by the graph encoder-decoder in practical applications? Are there any proposed strategies to simplify the implementation while retaining performance benefits?\\n2. Comparison Metrics: What specific metrics do the authors plan to use in future work to compare G4RL against recent GCHRL methods? Will they consider not only performance but also computational efficiency of integration?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
1Nwsqw0sTm | Open-Vocabulary Object Detection for Incomparable Spaces | [
"Masoumeh Zareapoor",
"Pourya Shamsolmoali"
] | In open-vocabulary object detection (OVDet), specifying the object of interest at inference time opens up powerful possibilities, allowing users to define new categories without retraining the model. These objects can be identified through text descriptions, image examples, or a combination of both. However, visual and textual data, while complementary, encode different data types, making direct comparison or alignment challenging. Naive fusion approaches often lead to misaligned predictions, particularly when one modality is ambiguous or incomplete. In this work, we propose an approach for OVDet that aligns relational structures across these incomparable spaces, ensuring optimal correspondence between visual and textual inputs. This shift from feature fusion to relational alignment bridges the gap between these spaces, enabling robust detection even when input from one modality is weak. Our evaluation on the challenging datasets demonstrates that our model sets a new benchmark in detecting rare objects, outperforming existing OVDet models. Additionally, we show that our multi-modal classifiers outperform single-modality models and even surpass fully-supervised detectors. | [
"Multimodal learning",
"object detection"
] | https://openreview.net/pdf?id=1Nwsqw0sTm | https://openreview.net/forum?id=1Nwsqw0sTm | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"owq9wVxyNL",
"Rb5EwJGfVb",
"KirN9g9qvb",
"Jh9zHeYKIY"
],
"note_type": [
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1733093233778,
1730548612168,
1730639121734,
1730558330069
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10775/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10775/Reviewer_3vd5"
],
[
"ICLR.cc/2025/Conference/Submission10775/Reviewer_JSDD"
],
[
"ICLR.cc/2025/Conference/Submission10775/Reviewer_Jt2n"
]
],
"structured_content_str": [
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper introduces a novel open-vocabulary detection method that utilizes both textual and visual classifiers, integrating them through feature-level alignment and relational alignment. The author conducts experiments on LVIS to demonstrate its performance on novel categories.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The feature-level alignment and relational alignment for fusing textual and visual classifiers is very interesting.\\n2. The weighted contextual embeddings and prototype discovery respectively optimize the methods for constructing textual and visual classifiers.\", \"weaknesses\": \"1.\\tThe method\\u2019s pipeline is similar to MMOVD[1], with only minor improvements made to the construction and fusion of classifiers. Overall, the novelty might be quite limited.\\n2.\\tThe method is similar to MMOVD, but lacks critical experiments comparing it with MMOVD, such as evaluations using IN-LVIS as extra data on the LVIS dataset, and MMOVD\\u2019s evaluations on cross-dataset transfer detection.\\n3.\\tThere are missing experiments that prove the effectiveness of the method. 1) Lack of experiments demonstrating that weighted contextual embeddings improve the performance of a text-based classifier compared to simply averaging; 2) Lack of experiments showing that using feature-level alignment and relational alignment is more effective compared to naive fusion strategies like addition.\\n4.\\tThe comparison experiments between V-CLS and V-Mean are not reasonable. V-CLS, compared to V-Mean, uses both the prototype discovery strategy and additional transformer blocks as the Visual Aggregator. This setup does not validate the effectiveness of the prototype discovery strategy. According to MMOVD[1], using a Visual Aggregator already performs better than directly averaging various visual embeddings. V-CLS should be compared with a Visual Aggregator that does not use the prototype discovery strategy.\\n5.\\tThere is a lack of hyperparameter analysis for $\\\\lambda$ and $\\\\alpha$.\\n6.\\tResults of open vocabulary object detection evaluations on the COCO dataset are missing.\\n\\n[1] Multi-Modal Classifiers for Open-Vocabulary Object Detection, ICML 2023\", \"questions\": \"1. Figure 2 attempts to demonstrate the model\\u2019s effectiveness in detecting rare categories, but the examples provided belong to either the frequent or common categories, which does not prove the model\\u2019s capability in detecting rare categories. For instance, \\u2018knife\\u2019, \\u2018skateboard\\u2019, \\u2018belt\\u2019, \\u2018pillow\\u2019, and \\u2018bicycle\\u2019 are all frequent categories, while \\u2018rhinoceros\\u2019, \\u2018goose\\u2019, \\u2018kiwi\\u2019, and \\u2018gull\\u2019 belong to common categories.\\n2. Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the challenges of open-vocabulary object detection (OVDet), where the goal is to detect objects at inference time that were not seen during training. The authors propose an approach called VOCAL (Vocabulary Alignment Classifier), which integrates visual and textual embeddings by aligning both feature-level and relational structures across these two modalities. This method aims to bridge the gap between visual and textual data, enabling robust detection even when input from one modality is weak or ambiguous.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. It combines textual descriptions and visual examples to identify objects, leveraging the strengths of both modalities to improve detection accuracy.\\n2. Instead of simple feature fusion, VOCAL focuses on aligning the contextual relationships between objects in text and images, which is a novel way to handle the misalignment problem in heterogeneous data. The model can adapt to new categories or unseen objects without retraining, which is a significant advantage in dynamic environments where new objects frequently appear.\\n3. The evaluation shows that the model outperforms existing OVDet models, setting new benchmarks in detecting rare objects.\", \"weaknesses\": \"1. The method involves complex alignment mechanisms that could be computationally expensive and may require substantial resources for training and inference.\\n2. The performance of VOCAL heavily relies on the quality of the text and image embeddings. If the embeddings are not representative, the alignment may not be effective.\\n3. While the model can adapt to new categories, the scalability to a very large number of categories or extremely rare objects is not explicitly discussed and could be a challenge.\\n4. Although the paper mentions cross-dataset transfer, the generalization of the model to datasets outside of the trained domain is a potential concern that may require further validation.\", \"questions\": \"Pls see the weeknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduce an approach for open-vocabulary object detection (OVDet) that aligns relational structures across visual and textual data to enhance the detection of objects, especially unseen or rare objects. The authors propose a model called VOCAL (Vocabulary Alignment Classifier) that shifts from feature fusion to relational alignment, bridging the gap between visual and textual inputs. VOCAL leverages both text descriptions and image examples to identify objects, addressing limitations such as lexical ambiguity, lack of visual specificity, and unknown class names. The evaluation on challenging datasets shows that VOCAL outperforms existing OVDet models and even surpasses fully-supervised detectors in detecting rare objects.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents a sophisticated method for OVDet that focuses on relational alignment between visual and textual data, which is a novel approach in the field.\\n\\n2. VOCAL demonstrates superior performance in detecting rare objects and outperforms existing OVDet models, which is a significant achievement.\\n\\n3. The model demonstrates a new benchmark in detecting rare objects and outperforms existing OVDet models, which is a substantial achievement.\", \"weaknesses\": \"1. The approach may be more complex and computationally intensive than simpler fusion methods, which could be a limitation in resource-constrained environments.\\n\\n2. The introduction of the Image and Text Encoder results in a detection process that requires more computation, and fairness compared to other OVDet methods needs to be considered.\\n\\n3. Some related OVDet methods are missing. For example, Distilling DETR with Visual-Linguistic Knowledge for Open-Vocabulary Object Detection ICCV 2023.\", \"questions\": \"See Weaknesses. My major concern is introducing much more complexity compared with previous methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1NprT9Kz0d | TexTailor: Customized Text-aligned Texturing via Effective Resampling | [
"Suin Lee",
"Daeshik Kim"
] | We present TexTailor, a novel method for generating consistent object textures from textual descriptions. Existing text-to-texture synthesis approaches utilize depth-aware diffusion models to progressively generate images and synthesize textures across predefined multiple viewpoints. However, these approaches lead to a gradual shift in texture properties across viewpoints due to (1) insufficient integration of previously synthesized textures at each viewpoint during the diffusion process and (2) the autoregressive nature of the texture synthesis process. Moreover, the predefined selection of camera positions, which does not account for the object's geometry, limits the effective use of texture information synthesized from different viewpoints, ultimately degrading overall texture consistency. In TexTailor, we address these issues by (1) applying a resampling scheme that repeatedly integrates information from previously synthesized textures within the diffusion process, and (2) fine-tuning a depth-aware diffusion model on these resampled textures. During this process, we observed that using only a few training images restricts the model's original ability to generate high-fidelity images aligned with the conditioning, and therefore propose a performance preservation loss to mitigate this issue. Additionally, we improve the synthesis of view-consistent textures by adaptively adjusting camera positions based on the object's geometry. Experiments on a subset of the Objaverse dataset and the ShapeNet car dataset demonstrate that TexTailor outperforms state-of-the-art methods in synthesizing view-consistent textures. | [
"3D texture synthesis",
"diffusion model",
"resampling"
] | Accept (Poster) | https://openreview.net/pdf?id=1NprT9Kz0d | https://openreview.net/forum?id=1NprT9Kz0d | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zXhu56q5lj",
"xQlIcBNtvn",
"uzRzKXqWKI",
"oJKgdy1hKM",
"lu2Ewyduai",
"foheduyAwQ",
"e0IkVBTk1z",
"dfJeQsjC8F",
"cya0l8OmnM",
"ckzzdnhCPB",
"bUiTcGxZDK",
"ZqRiAGQrUf",
"ZJ37epykBw",
"XbxP5Om8Ti",
"XJZS0RTOPd",
"WpfNOs2ZxC",
"TnvsOjxQZA",
"ScYDejV7Jk",
"LrH1w6sQMy",
"J3yA5bSnXC",
"J3ZSpoCQzB",
"Is0TJuG7g5",
"H7wYsecfxF",
"Gs4TnZYcDH",
"FxV9hHiUoO",
"DO990akS2Q",
"D7sRK6fBfu",
"Cdudz41PIk",
"CEwgDFPrV2",
"7V6OW9ZPXb",
"3RyeNQQEiz",
"1syti3c3hS",
"10ZouuFLuM"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1729989799617,
1732711095612,
1732367156996,
1732452613113,
1732484509192,
1732366968651,
1732380816404,
1732422951311,
1732484720232,
1732367014146,
1732531200573,
1730386070432,
1737524129666,
1732465902717,
1730533102781,
1729536865142,
1732484690933,
1732484583275,
1732465949377,
1732484637533,
1734743708292,
1732367133215,
1732748026401,
1732484524191,
1732367101726,
1732380795394,
1732531214278,
1732452871957,
1732627023097,
1732380712146,
1732465930358,
1732710879661,
1732380832483
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_ujQC"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_ujQC"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_3JDj"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_uECh"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_NBoV"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Area_Chair_t5Dm"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_ujQC"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_3JDj"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Reviewer_uECh"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11533/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The paper focuses on **consistent texture synthesis**. The authors analyze the artifacts in the current approaches and propose a new approach, **TexTailor**, to keep synthesized textures consistent across different viewpoints. TexTailor equips with a resampling scheme for integrating previous textures, a finetuned depth-aware T2I model trained with performance preservation loss, and an adaptive viewpoint refinement strategy for inpainting.\\n\\nThe authors evaluate the performance of TexTailor on a subset of Objaverse dataset, and showcases that TexTailor outperforms state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"## Motivation\\nThe paper starts with an analysis of the limitations of previous methods. It hypothesizes those inconsistent results from previous methods are mainly coming from an inappropriate way of integrating information from previously synthesized textures. Given this agile insight, it tries to addresses the inconsistency issue by proposing a new approach to better use information across different viewpoints and previously synthesized textures. \\n\\nThe motivation of the paper is more about a technical aspect. The analysis of previous approaches makes sense. \\n\\n## Method\\n- In Section 3.2, the problem of ControlNet for incorporating multi-views is interesting. \\n- In Section 3.3, the analysis of setting viewpoints sounds interesting (Line 303-310). Using a proportion (Eqn. (12)) is an intuitive way. \\n\\n## Experiments\\n- TexTailor outperforms the previous methods in terms of view consistency and quality, as shown in Table 1. \\n- The ablation study shows a progressive improvement of each component. \\n\\nThe authors also show the limitation of TexTailor - the processing time could be further improved.\", \"weaknesses\": [\"What concerns me the most in this paper is the motivation behind some technical parts and its unclear writing.\", \"## Motivation\", \"In Line 93, it is not clear to me why finetuning a depth-aware T2I model matters. Maybe including a brief explanation could be helpful.\", \"## Method\", \"In Section 3.1, the authors propose a non-Markov process to reduce the sampling steps. However, the benefits of it is confusing to me. Would it involve a faster sampling speed? If it would, there is not result to support it. On the other hand, the authors mainly show the effects of resampling is to \\\"preserve the texture properties\\\" (Line 480). This makes me confused about the motivation of newly proposed resampling trick.\", \"## Experiments\", \"It does not make sense to me the authors choose to not compare with text-driven methods (Line 373-374) just because they have \\\"difficulties\\\" when optimizing textures for \\\"cars\\\". Wouldn't it be a good chance to showcase the superiority of TexTailor?\", \"The authors do not show any viewpoint-varying results in video format, making it less convincing that TexTailor achieves a good view consistency.\", \"It is hard to see obvious improvement from TexTailor in Figure 5, especially comparing with Text2Tex. Perhaps including some bounding boxes and zoom-in patches would help.\", \"## Writing/Delivery\", \"The writing of the papers can be further improved. 
For example,\", \"Most of the figures in the paper are compressed, resulting in blurriness and sometimes hard to read.\", \"In Fig.1, citing previous methods (i.e., Text2tex and Texture) might make readers easier to check the idea of them.\", \"It is challenging for readers to digest Eqn. (6) - (8). A good strategy to improve it might be similar to what Repaint shows in their paper: demonstrating all the terms in a figure with pictures for a vivid demonstration. Current delivery of newly proposed resampling way in Section 3.1 is hard for readers to understand, especially about the main difference between it and Repaint.\", \"Fig.3 does not deliver a clear message for each component. For example, simply giving readers two equations does not help them to understand what is going on. It might be helpful if the authors can name these two equations in high level.\"], \"questions\": \"1. What are the difficulties for the text-driven methods mentioned in Lines 373 and 374?\\n2. Is LPIPS (Section 4.1, Evaluation metrics) a good metric to evaluate view consistency, as LPIPS is sensitive to spatial information? Given that the view angles are known, would it make more sense to reproject one of the views to another and then compute LPIPS between the projected view and the other one?\\n3. What does the performance preservation loss do in Eqn. (10)? Why would it be effective at a high level? \\n\\nSome of the questions may have been entangled with the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your detailed feedback and for raising your concern about Q4. I hope the response we provided sufficiently addressed your question regarding the use of LPIPS for measuring view-consistency and the potential alternative of reprojecting viewpoints. If there are any remaining points of clarification needed or further concerns, please do not hesitate to let us know. Your insights are greatly valued, and we want to ensure we have addressed your feedback thoroughly.\"}",
"{\"comment\": \"#### 2.RenderPeople Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"A business woman wearing a white blouse with a ribbon detail, light beige pants, nude-tone heels, and neatly tied blonde hair\\\"|[claudia](https://drive.google.com/file/d/1cugJNiHxkIxzZmfapOkUSUt1b5nmX-I4/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1tuUSCLRtZs8meKal50_kkfX0XkTDIlwU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1HkO7HNKWkFh4u_xZfxRTgiyQV35ENyvS/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1cYYHgVoRn5YN-BQGbSXxtVa3ONWLBpIU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1jpUILlxb39eSsAhZnk0kMBt2dTQ_iswl/view?usp=drive_link)|\\n|\\\"A man, wearing a white dress shirt, a black vest, black formal trousers, a black tie, a black belt, and dark formal shoes, with short neatly styled hair\\\"|[eric](https://drive.google.com/file/d/1zu8JFHMlV2oHCu84je9z-hJn1_YUACl0/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1ejcq1bB6djVUmmCqAxbqrDA2q_l_mOsK/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1kQ4JW-Nu14p8qiLvR4RwvB90Iu_elRoI/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1FItS8NhnffnzjoNPPU1or_t6ScWUveO2/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1YibXGY7AicDrCaalaJUChKGRE5nQ6T7c/view?usp=drive_link)|\\n|\\\"A man, wearing a gray short-sleeve T-shirt, blue jeans, white sneakers, and short, dark brown hair styled neatly\\\"|[manuel](https://drive.google.com/file/d/1yRmwmkkZ8c2VmC3fr2B92R2ztWT6BC0a/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1jasYCugvfEaWtXMusmCFExluKTggHtKX/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1kXzUleXzPhJkehtuoNUPhcj9YJzqYSRm/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1ZYkWlhq2cxa-jyxaPeHbdvqTmjQE9-RP/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1epeP2Ms2HC5AkW7uumw9f08H6dk-FVCc/view?usp=sharing)|\\n|\\\"A woman with medium-dark skin tone, wearing a black blazer, a black top, gray pants with a gray tied belt, black heels, and having neatly styled dark hair\\\"|[carla](https://drive.google.com/file/d/1SN1Hft8lAbLU4HHTxB7k6e_WxoG-h-r-/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1jJCNNpgXsB8xJKu6NnGkXicQ3ZDUOCRf/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1OZ-uWrrX06M19CDgpukyCMkM0A5A4l78/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1vc3EaAvNO18t8bUo8l_-KC5hhGTOzDTD/view?usp=sharing)|[carla](https://drive.google.com/file/d/1TlZ2wcokdUOz9Q-KIsSIShCiheelB1ox/view?usp=sharing)|\\n\\n\\n### Reference\\n[1] Cao, Tianshi, et al. \\\"Texfusion: Synthesizing 3d textures with text-guided image diffusion models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Huo, Dong, et al. \\\"TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\\n\\n[3] Chen, Dave Zhenyu, et al. \\\"Text2tex: Text-driven texture synthesis via diffusion models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[4] Richardson, Elad, et al. \\\"Texture: Text-guided texturing of 3d shapes.\\\" ACM SIGGRAPH 2023 conference proceedings. 2023.\\n\\n[5] Shen, Tianchang, et al. 
\\\"Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis.\\\" Advances in Neural Information Processing Systems 34 (2021): 6087-6101.\\n\\n[6] Lin, Chen-Hsuan, et al. \\\"Magic3d: High-resolution text-to-3d content creation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[7] Chen, Rui, et al. \\\"Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2023.\\n\\n[8] Lugmayr, Andreas, et al. \\\"Repaint: Inpainting using denoising diffusion probabilistic models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[9] Liu, Yuxin, et al. \\\"Text-guided texturing by synchronized multi-view diffusion.\\\" arXiv preprint arXiv:2311.12891 (2023).\\n\\n[10] RenderPeople, https://renderpeople.com/free-3d-people/, 2023\"}",
"{\"comment\": \"Based on my understanding, the current question seems to address the overlapping texture regions between adjacent viewpoints. For example, when rendering from two viewpoints, $v_1$ and $v_2$, a portion of the mesh surface visible from $v_1$ may appear rotated or differently positioned when viewed from $v_2$. This overlap may affect the LPIPS metric and raises concerns about its reliability as a measure of view consistency.\\n\\nHowever, we believe LPIPS remains a valuable metric for measuring view consistency for the following reasons:\\n\\n1. All models compared in the paper were rendered from the same set of viewpoints. As such, any increase in LPIPS due to positional differences in the overlapping regions is equally reflected across all models. Therefore, the differences in LPIPS scores presented in the paper are not a result of positional discrepancies but rather the perceptual differences between the rendered images (i.e., how naturally texture consistency is maintained across viewpoints).\\n\\n2. There are prior works [1] that have used LPIPS as a metric for measuring view consistency, which lends credibility to its application in this context.\\n\\n3. To address the overlap between $v_1$ and $v_2$ through reprojection, one simple approach involves using SIFT to extract feature points and estimate a Homography Matrix via feature matching. However, since this method relies on estimation, it may introduce inaccuracies or distortions, which could undermine the reliability of the results.\\n\\n### Reference\\n[1] Hong, Susung, Donghoon Ahn, and Seungryong Kim. \\\"Debiasing scores and prompts of 2d diffusion for view-consistent text-to-3d generation.\\\" Advances in Neural Information Processing Systems 36 (2023): 11970-11987.\"}",
"{\"comment\": \"Thank you for taking the time to review our work. We greatly appreciate your recognition of the key aspects of our proposed methods and your positive feedback on the experiments. We will revise the manuscript to incorporate your suggestions.\\n\\n> Q1. While sound, the ideas introduced in this work are somewhat limited in scope and the paper fails to be compelling that they are particularly effective. In this sense, I am not convinced about the extent upon which these contributions will be impactful in the literature. Furthermore, the resampling scheme introduced in this paper is not new, as it is borrowed from previous work. Therefore, the ideas introduced here are not particularly novel nor signficant.\", \"i_believe_the_contributions_of_this_paper_are_as_follows\": \"1.\\tWe extend the resampling technique, previously used only in 2D image inpainting, to the field of 3D texture synthesis by applying it to the DDIM non-Markovian process.\\n\\n2.\\tWithout relying on an external dataset of 3D meshes, textures, or text, we propose a novel approach that fine-tunes the model using only a few images that accurately represent the texture of a specific object within the distribution learned by the existing depth-aware T2I model. Even with just a few images inferred by the original model, this approach effectively compensates for textures that are difficult to generate at certain angles and demonstrates the ability to maintain texture consistency across various viewpoints\\n\\n3.\\tTo address the catastrophic forgetting phenomenon\\u2014where a depth-aware T2I model forgets information originally learned from a large-scale dataset when fine-tuned on a smaller one\\u2014we propose the performance preservation loss to mitigate this issue and maintain the model's performance.\\n\\n4.\\tTo eliminate the need for manually configuring optimal camera positions based on object geometry\\u2014a process requiring significant time and effort\\u2014we introduce an adaptive method that adjusts camera positions dynamically based on the extent of texture coverage at each viewpoint.\\n\\nAs you mentioned, point 1 may seem less novel as it recombines existing methods. However, we believe that the other contributions, particularly the ability to generate consistent images across various angles, can be highly applicable not only to the texture synthesis field but also to other 3D domains requiring such consistency. This makes our work a valuable contribution to the ICLR community and a potential foundation for future research.\\n\\n>Q2. Insufficient results are shown on the paper. It is hard to understand the capabilities of the model with the amount of results shown here. In particular, only four results are shown in the comparisons on Objaverse, and these results are not particularly compelling (for example, in the hammer, the method assigns a metal texture to the handle and a wooden texture to the head, which is not correct and arguably a worse result than TEXTure). Only two results are shown in ShapeNet car, and the ablation study is shown exclusively on a single object. Significant more results should be provided to convince the reader that the method is more effective than previous work. \\n\\nTo further demonstrate the effectiveness of our proposed methodology, we will showcase qualitative comparisons for four additional objects from the Objaverse dataset, beyond the four objects already presented in the qualitative results of the paper, in GIF format. 
\\n\\nAdditionally, to demonstrate that the effectiveness of each method shown in Fig. 7 of the paper is not limited to \\\"a muffin\\\" object, but is also applicable to other types of objects, we will present ablation studies on additional objects as well.\"}",
"{\"comment\": \"Thank you for taking the time to review our work. We greatly appreciate your recognition of the key aspects of our proposed methods and your positive feedback on the experiments. We will revise the manuscript to incorporate your suggestions. If any part of our response is unclear, please do not hesitate to reach out for further clarification. Before addressing the main points of the rebuttal, we have standardized the notation to align with the terminology and references used in our submitted manuscript, rather than those from the cited papers. We kindly ask for your understanding regarding this adjustment.\", \"detailed_response_to_comments\": \"> Q1. The resampling strategy is similar to the [TexFusion:Synthesizing 3D Textures with Text-Guided Image Diffusion Models] [1] and [TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling] [2]. Please explain the difference.\\n\\nIn TexFusion [1], the method progressively transitions between viewpoints during the diffusion process, denoising the areas corresponding to \\\"new\\\" regions at each timestep. When transitioning from one viewpoint to another, inconsistencies in noise levels may arise between the \\\"keep\\\" and \\\"new\\\" regions. To resolve this, TexFusion [1] adds an additional step of noise to the \\\"keep\\\" region to align the noise levels. Over time, after completing this process across multiple viewpoints at each timestep, the areas corresponding to \\\"new\\\" regions are aggregated into a single texture map for that timestep. Consequently, the entire object is textured across all viewpoints, with denoising occurring once per timestep from simple Gaussian noise.\\n\\nIn contrast, our method differs in that, within each timestep of the diffusion process, the process of adding noise and removing it (denoising) is performed multiple times for the \\\"new\\\" region while simultaneously merging it with the \\\"keep\\\" region. This process is repeated $R$ times per timestep.\\n\\nOn the other hand, the \\\"resampling\\\" described in TexGen [2] differs from our definition of resampling. In TexGen [2], resampling involves estimating $z_0$, the final denoised result at each timestep, and using this estimate to predict the noise network's value at the same step. In other words, their resampling strategy focuses on re-sampling the noise network value ${\\\\epsilon}_{\\\\phi}$. In contrast, our method's resampling strategy involves re-sampling the latent vector $z_t$ multiple times.\\n\\nIn summary, compared to TexFusion [1], our method differs in that we perform the process of adding noise and removing it (denoising) multiple times within each timestep (i.e., a difference in the number of sampling iterations at each time step). Furthermore, our resampling approach differs from that of TexGen[2] in both the target (resampling ${\\\\epsilon}_{\\\\phi}$ versus $z_t$) and the method employed.\\n\\n> Q2. In Fig. 1b, it seems that the proposed method is over-smoothed. Please explain the reason.\\n\\nIf the depth-aware diffusion model lacks sufficient training for a specific viewpoint of the object image conditioned on text descriptions or depth maps, or if the texture information applied from previous viewpoints (corresponding to the \\\"keep\\\" regions in Sec. 2.2) is insufficient to generate consistent textures in textureless areas (corresponding to the \\\"new\\\" regions in Sec. 2.2) due to geometric constraints (e.g. 
objects that appear small from a specific angle), it becomes challenging to run inference for that particular angle while keeping the texture properties aligned with the initial viewpoint. While our methodology ensures that the object's properties are preserved, this limitation may lead to a slight loss of texture detail, resulting in an over-smoothed appearance. However, as demonstrated in Text2Tex [3] and TEXTure [4], this issue is effectively mitigated through an update process (Sec. 2.2) that refines the detailed regions.\"}",
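To make the per-timestep resampling described in the response above concrete, here is a minimal sketch (not the actual TexTailor implementation) of a single diffusion timestep in which the "keep" and "new" regions are merged and the latent is re-noised and denoised $R$ times. The helper callables `denoise_step` and `add_noise` and the exact merging rule are assumptions for illustration.

```python
def resample_timestep(z_t, t, keep_latent_t, keep_mask, denoise_step, add_noise, R=4):
    """One diffusion timestep with R resampling iterations (illustrative sketch).

    z_t           -- current latent at noise level t
    keep_latent_t -- latent of the already-textured ("keep") region, noised to level t
    keep_mask     -- 1 where texture from previous viewpoints exists, 0 in the "new" region
    denoise_step  -- one reverse (e.g. DDIM) step t -> t-1 of the depth-aware model
    add_noise     -- one forward step t-1 -> t used to re-noise the latent
    """
    for r in range(R):
        # merge: trust the known texture in the keep region, keep the model's
        # prediction only in the untextured (new) region
        z_t = keep_mask * keep_latent_t + (1 - keep_mask) * z_t
        # denoise the merged latent by one step
        z_prev = denoise_step(z_t, t)
        if r < R - 1:
            # re-noise back to level t so the merge can be repeated, letting the
            # new region adapt to the keep region several times per timestep
            z_t = add_noise(z_prev, t)
    return z_prev
```

This is the sense in which the latent $z_t$, rather than the noise prediction, is re-sampled $R$ times at each timestep.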
"{\"comment\": \"#### 1.Objaverse Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"a basketball\\\"|[link](https://drive.google.com/file/d/1PUhh1CFVGfzvHqeNS7-VWlspWygV7M9i/view?usp=drive_link)|[link](https://drive.google.com/file/d/10-Vjeq2lBFIrApNeJ409yoJL3tbdL7M5/view?usp=drive_link)|[link](https://drive.google.com/file/d/1-zmxggvpEi17NlXk4rwqswcguWHToUnx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1tGnNDX-1Z1AWRACQzloKAnW7EJrbkIgV/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ChOOEJCHugwlXSTZMtRPVyR7MOBsXiow/view?usp=drive_link)|\\n|\\\"a cd player\\\"|[link](https://drive.google.com/file/d/1fKhj-8aeGkVKjfb3FwtGUBkuf7FuKiM2/view?usp=sharing)|[link](https://drive.google.com/file/d/1VdXOQslsVru-x2x19tGYfKttT9rI7hit/view?usp=sharing)|[link](https://drive.google.com/file/d/1JE_LDcZm1-oBX4QJN-dIlmBCcNzJz2ne/view?usp=sharing)|[link](https://drive.google.com/file/d/1sRkv6iXBa7e9TVSMcV3hrlbZ8tZo0u_m/view?usp=sharing)|[link](https://drive.google.com/file/d/17k4XOBHya95ZsYwqRs_NONUff9Apw7r4/view?usp=sharing)|\\n|\\\"a desk chair\\\"|[link](https://drive.google.com/file/d/1mm_0cvW2RsvktsLcy07JXAeS5CGVfENO/view?usp=drive_link)|[link](https://drive.google.com/file/d/1yJQkmI6THlNn3hoL-mW2O7D5CI_A6gq9/view?usp=drive_link)|[link](https://drive.google.com/file/d/1pRvTW1eEjVNk7rycoJRAJ8IdvPqHEj46/view?usp=drive_link)|[link](https://drive.google.com/file/d/1Jpx9vtBIV7ymD7zRHLBIPSNAxSItMjFe/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ELVqtUhQbT5XmUcqko0h1pJjMGaicmAl/view?usp=drive_link)|\\n|\\\"a dumpster\\\"|[link](https://drive.google.com/file/d/1HKaoGX_dUN0-TQXlmIAuCbHygUFUV3bD/view?usp=drive_link)|[link](https://drive.google.com/file/d/1f1m09_mCKa9_4KZOGKme8BwsAc_7lkJG/view?usp=drive_link)|[link](https://drive.google.com/file/d/18UZWI-sGhUqXNuFW0SdnIFFO_QNVoiNI/view?usp=drive_link)|[link](https://drive.google.com/file/d/1zN_VombixmEsueJfd6L5kP7zws5fzq0G/view?usp=drive_link)|[link](https://drive.google.com/file/d/1jtiXfphExSoW4wGiDGUBRTD1t4nGVdKi/view?usp=drive_link)|\\n|\\\"a hammer\\\"|[link](https://drive.google.com/file/d/1RSq-vzcPfclttJToHObHypn7hs17VLEd/view?usp=sharing)|[link](https://drive.google.com/file/d/1Kw7aEhSdx3mCXdYkGKRzrGeobqaLJwAW/view?usp=sharing)|[link](https://drive.google.com/file/d/1aexhUd7Nl0Aey3Vg4I-XnoAdlzb3UZPJ/view?usp=drive_link)|[link](https://drive.google.com/file/d/1kxIuyKW3ROjT25KaohiRxpwVAwvQaVWC/view?usp=sharing)|[link](https://drive.google.com/file/d/1GqPePJEubcHmS5LOnMir5OmVIsHv8iTL/view?usp=drive_link)|\\n|\\\"a minivan\\\"|[link](https://drive.google.com/file/d/1yEG9Ni4rrNaZ4kCvL4WRgrY5nU2CxfZS/view?usp=drive_link)|[link](https://drive.google.com/file/d/11nAvcQqOKF-fCaP5-ZAHBNTzD27Fujz2/view?usp=drive_link)|[link](https://drive.google.com/file/d/1sUQbNuGWEw9S85RozmnzII0ByIneFv3W/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ku15fjR4ts_2KJm0Rl6OqGi5h9mP3hqi/view?usp=drive_link)|[link](https://drive.google.com/file/d/1EJcNOro3_wlw6spZ7TYFwtWfZ1Vx3dbL/view?usp=drive_link)|\\n|\\\"a money\\\"|[front](https://drive.google.com/file/d/10rPPgZsWc0iotldevklujifOZMbiz-0X/view?usp=drive_link), [back](https://drive.google.com/file/d/1PaRrWMzpS073RU7bg8NKvnB2wVvnF1ZX/view?usp=drive_link)|[front](https://drive.google.com/file/d/1uNwXkh7O4UASFmm_BhDJJfX8MYWj_My6/view?usp=drive_link), 
[back](https://drive.google.com/file/d/10x-gy15J7OgJdymqFsalM_R-aF-XCJi4/view?usp=drive_link)|[front](https://drive.google.com/file/d/1rp4m1AdduMMOUyXMHUng5eq8jmQBsF1e/view?usp=drive_link), [back](https://drive.google.com/file/d/1mRWQlFv7YF4SeOrPSq4GrXkro6nveYzC/view?usp=drive_link)|[front](https://drive.google.com/file/d/1EFDwk-csQEiokhU09NiubX-yGC4NNXr7/view?usp=drive_link), [back](https://drive.google.com/file/d/1I_uEbvFRMn9HdfhjgDSNf7miW_fgTLQf/view?usp=drive_link)|[front](https://drive.google.com/file/d/1SRVIlpRovqU2o2d1XYQMkdHXsgd_TQxC/view?usp=drive_link), [back](https://drive.google.com/file/d/1Oc45Uj5M5ySbKikOYqvl6jA0KX9sEB6O/view?usp=drive_link)|\\n|\\\"a sushi\\\"|[link](https://drive.google.com/file/d/15QWzVwVvJ-r7rFqUcVXGhnmNGRMdUClx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1d7hxtTi8NFio4kAXeMJjIGyrIaOwmfmA/view?usp=drive_link)|[link](https://drive.google.com/file/d/1uZerJAnPSwV-bAw0e3JXwBpaPsL96XzK/view?usp=drive_link)|[link](https://drive.google.com/file/d/1VMeuBZoB-CuDDc0ekSCb5ezrXQLeVNgr/view?usp=drive_link)|[link](https://drive.google.com/file/d/1gSCl1HYbhxIK4ovEEs0OJVF1CgqIgHLg/view?usp=drive_link)|\"}",
"{\"comment\": [\"Thanks for your responses! Most of my concerns are revolved. I'd like to increase my rating from 3 to 5.\", \"I still have a concern about Q4. I'd like to clarify my point:\", \"LPIPS, although as a \\\"corase\\\" perceptual loss, still shows high error for the pixels that are not aligned well. For example, if you rotate your image by 30 degrees, you will see the LPIPS between the original image and the rotated one becomes obvious.\", \"The authors mention that they measure the view-consistency by computing LPIPS between rendered images across different viewpoints (L400-L402). In that sense, will LPIPS becomes unreliable since it is sensitive to this \\\"spatial shift\\\" in the example, although they contain high-level information?\", \"To better compute the view-consistency, a possible way could be that we project the image from one viewpoint $v_1$ to another one $v_2$, and then compute the difference (you could use LPIPS) between the rendered image at $v_2$ and the \\\"reprojected\\\" image from $v_1$.\"]}",
"{\"comment\": \"> Q11. I believe that computational costs should be compared more explictly with previous work, so as to better understand the quality/cost pareto frontier in this line of work.\\n\\n|Method|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|Runtime(minutes)|46|24|2|25|90|\\n|GPU memory usage(MB)|10670|12077|12228|30905|22478|\\n\\nAs shown, TexTailor has the longest runtime among the methods, which we acknowledge as one of the key limitations discussed in the paper. To overcome this limitation, accelerating the fine-tuning process is a primary focus for our future work.\\n\\n> Q12. Limitations are not adequately addressed or discussed. From the paper, it seems like the only limitation of the proposed method is its computational cost. However, the results shown in the paper indicate that the method is by no means perfect and it struggles with consistently assigning the appropriate texture to different parts of the object, among other limitations. These should be mentioned more explicitly. I suggest the paper should provide a more upfront discussion of its limitations.\\n\\nBeyond computational cost, we identify the following limitations of TexTailor:\\n\\n(1) Dependence on the Quality of the Five Training Images: The overall texture quality heavily relies on the quality of the five images used to fine-tune the depth-aware T2I diffusion model. Even with the resampling method and the use of viewpoints close to the initial one, certain angles of the object may still produce textures with inconsistent properties or images misaligned with the depth condition. Fine-tuning the model using such suboptimal images can potentially degrade the texture quality instead of improving it.\\n\\n(2) Repetitive Patterns Across Views: Patterns frequently observed in viewpoints close to the initial one tend to repeat throughout the object. For instance, in the case of an alarm clock, the clock hands learned from the initial viewpoint may appear as texture patterns on the sides or even the front of the clock, resulting in repetitive and unrealistic textures.\\n \\n\\n### Reference\\n[1] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[2] Casas, Dan, and Marc Comino-Trinidad. \\\"Smplitex: A generative model and dataset for 3d human texture estimation from single image.\\\" arXiv preprint arXiv:2309.01855 (2023).\\n\\n[3] Wang, Weijie, et al. \\\"UVMap-ID: A Controllable and Personalized UV Map Generative Model.\\\" Proceedings of the 32nd ACM International Conference on Multimedia. 2024.\\n\\n[4] Liu, Yufei, et al. \\\"TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation.\\\" arXiv preprint arXiv:2403.12906 (2024).\\n\\n[5] RenderPeople, https://renderpeople.com/free-3d-people/, 2023\\n\\n[6] Lugmayr, Andreas, et al. \\\"Repaint: Inpainting using denoising diffusion probabilistic models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\"}",
"{\"comment\": \"> Q3. I do not get the correlation of the third paragraph in Section I with the main content. I think the geometry conversion is not a problem to be solved in this paper. And the SDS can be directly applied on the mesh (DMTET) which does not need conversion.\\n\\nThis paper focuses on texture map generation based on explicit representations in 3D modeling. To emphasize this, the third paragraph discusses the advantages of explicit representations over implicit ones, such as their ease of integration into graphics engines and real-time applications. However, as noted in the review, the explanation regarding the necessity of generating texture maps within explicit representations may have been somewhat unclear. With your permission, I propose revising this section to provide greater clarity as follows:\\n\\n\\\"While these methodologies provide both geometry and texture, converting implicit neural representations into explicit formats, such as meshes, remains necessary for integration into graphics engines and real-time applications. Recently, DMTet [5] has enabled precise mesh geometry extraction from implicit representations by leveraging a signed distance field and the Marching Tetrahedra algorithm. However, in texture synthesis, texture unwrapping often leads to inconsistent mappings, which can degrade the visual quality of the output or necessitate additional texture synthesis steps.[6][7]\\\"\\n\\n> Q4. The authors mentioned that \\\" we can achieve high-quality texture with only 30 steps, significantly fewer than the 250 steps required by the original resampling method for a single view.\\\" Other methods like texture and text2tex only sampled no more than 50 steps for each view. Where is this \\\"250\\\" from.\\n\\nLugmayr et al. (2022) [8], who first proposed the resampling method for denoising diffusion probabilistic models, stated that 250 timesteps are required to achieve a proper inpainting effect with their resampling strategy. In contrast, by incorporating the DDIM scheme in our methodology, we reduced the number of timesteps to 30 without compromising quality.\\n\\n> Q5. What is the meaning of resampling steps? Does it mean you have to sample R steps for each view at each timestep?\\n\\nThat's correct. What I meant is that noise addition and removal are repeated $R$ times at each timestep. I apologize for the confusion. Do you think changing the name from \\\"resampling\\\" to \\\"multiple resampling\\\" might make it easier to understand?\\n\\n> Q6. The authors used \\\"resampled images near the first viewpoint to extract images of the same object from different angles in the output domain of the diffusion model\\\". How to make sure that the viewpoints near the first viewpoint maintain the same style as the first view.\\n\\nFor viewpoints close to the first viewpoint, the texture applied from the first viewpoint is repeatedly conditioned into the diffusion process through the resampling strategy. This increases the likelihood of generating textures in a style similar to the first viewpoint. Furthermore, for the five viewpoints nearest to the first viewpoint, we assumed that the textures would be generated in the same style as the first viewpoint. This assumption is based on the fact that, at the end of processing each of these five viewpoints, ControlNet is trained using the resampled texture images synthesized from the previous viewpoints, ensuring consistency across the textures generated for these five viewpoints.\\n\\n> Q7. In the loss function of Eqn. 
10, the target is constraining the new noise estimation to be the same as the original noise estimation. The what is the meaning of training? The optimal case is keeping the original model unchanged.\\n\\nOur final loss is defined in Eq. (11) by combining Eq. (9) and Eq. (10). Here, $\\\\epsilon$ and $\\\\epsilon_{\\\\text{orig}}$ are different, so the optimal case cannot simply be assumed to preserve the parameters of the original model. The reason for introducing Eq. (10) is that, when training ControlNet with a limited number of images (in this case, rendered images from viewpoints close to the first viewpoint), there is a high risk of the model forgetting the information it had previously learned. This leads to a phenomenon known as catastrophic forgetting, which can impair ControlNet's capabilities. To prevent this, Eq. (10) ensures that the parameters of ControlNet remain close to the values learned during its original training with a large dataset.\"}",
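For clarity, the combined objective described in this response could be sketched as below. The ControlNet call signature, argument names, and the weight `lam` are placeholders for illustration; only the structure (a fine-tuning term as in Eq. (9), plus a term that keeps the noise prediction close to that of the frozen original model as in Eq. (10), combined as in Eq. (11)) follows the description above.

```python
import torch
import torch.nn.functional as F

def preservation_finetune_loss(controlnet, frozen_controlnet,
                               z_t, t, text_cond, depth_cond, noise, lam=1.0):
    """Sketch of the combined fine-tuning objective (interface is hypothetical).

    controlnet        -- ControlNet being fine-tuned on the few resampled views
    frozen_controlnet -- frozen copy holding the originally trained weights
    noise             -- ground-truth noise used to produce z_t from the clean latent
    lam               -- weight of the performance preservation term
    """
    eps = controlnet(z_t, t, text_cond, depth_cond)
    finetune_loss = F.mse_loss(eps, noise)            # fit the resampled textures (Eq. 9)

    with torch.no_grad():
        eps_orig = frozen_controlnet(z_t, t, text_cond, depth_cond)
    preserve_loss = F.mse_loss(eps, eps_orig)         # stay close to the original model (Eq. 10)

    return finetune_loss + lam * preserve_loss        # combined objective (Eq. 11)
```

Because `eps` is pulled toward both the ground-truth noise and the frozen model's prediction, the optimum is not simply the unchanged original network, which is the point made in the response above.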
"{\"comment\": \"> Q2. While the paper discusses performance preservation loss qualitatively, a quantitative analysis of its impact on quality would clarify its specific role. Including an ablation study of the performance preservation loss in Table 2 could better highlight its contribution to TexTailor\\u2019s performance.\\n\\nHere is the table presenting the ablation study results for the performance preservation loss, as per your suggestion.\\n\\n|w/Resampling|w/Training|w/Performance preservation loss|w/Adaptive view refinement|LPIPS $\\\\downarrow$|FID $\\\\downarrow$|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|$\\\\checkmark$|x|x|x|38.89|30.924|\\n|$\\\\checkmark$|$\\\\checkmark$|x|x|39.849|53.85|\\n|$\\\\checkmark$|$\\\\checkmark$|$\\\\checkmark$|x|38.00|30.567|\\n|$\\\\checkmark$|$\\\\checkmark$|$\\\\checkmark$|$\\\\checkmark$|37.89|29.998|\\n\\nAs shown in the results, fine-tuning without introducing the performance preservation loss causes the existing capabilities of the depth-aware diffusion model to deteriorate, leading to worse performance metrics. These findings are also supported qualitatively, further reinforcing our claims.\\n\\n> Q3. Since the viewpoint refinement uses a fixed threshold, how sensitive is the model\\u2019s performance to changes in this parameter? \\n\\nThank you for raising this insightful question. We agree that evaluating the sensitivity of the model's performance to changes in the fixed threshold used in viewpoint refinement would provide valuable insights. However, due to time and resource constraints during the rebuttal period, we were unable to conduct these additional experiments. We acknowledge the importance of this analysis and plan to address it as part of our future work to further refine and validate our approach. We sincerely appreciate your thoughtful feedback.\"}",
"{\"summary\": \"This paper introduces a method for text-driven 3D object texturing. Previous work on this important topic fail in some areas, according to the paper, including: Consistency and the gradual change in textures assigned to the object. The paper aims to solve these issues, by introducing 2 ideas: First, the model leverages a resampling scheme for better integration of previously generated texture during the diffusion process, and second, the model fine-tunes a depth-aware diffusion model with these resampling textures. With these contributions, the method is said to achieve higher quality and consistency than previous work, measured on a set of datasets and perceptual metrics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper adequately identifies problems in previous methods for text-driven object texturing, including lack of texture consistency and graduality in texture changes. The origin of this problems are identified as being caused by insufficient integration, predefinition of camera positions, and autorregresion. The paper introduces changes to these methods, to enhance their quality and consistency. This is an important line of research, as these works are becoming more prevalent in the literature and industrial applications.\", \"This paper tackles an important and salient problem in the literature.\", \"The results shown in the paper indeed suggest that the proposed method provides less gradual changes in texture properties. In objects with different parts, TexTailor shows superior performance in assigning different texture to different parts, than previous work do.\", \"The proposed method is sound, and the ideas proposed here are very well suited for the task the paper is aiming to solve. In this sense, the paper is correct as far as I am familiar with the literature and the problems in 3D content generation.\", \"This paper is well written and easy to follow. The problems identified in previous work are clearly stated, and the ideas to solve them are easy to understand and very well explained.\", \"Code is provided as a supplementary material, which should greatly enhance reproducibility.\"], \"weaknesses\": [\"While sound, the ideas introduced in this work are somewhat limited in scope and the paper fails to be compelling that they are particularly effective. In this sense, I am not convinced about the extent upon which these contributions will be impactful in the literature. Furthermore, the resampling scheme introduced in this paper is not new, as it is borrowed from previous work. Therefore, the ideas introduced here are not particularly novel nor signficant.\", \"Insufficient results are shown on the paper. It is hard to understand the capabilities of the model with the amount of results shown here. In particular, only four results are shown in the comparisons on Objaverse, and these results are not particularly compelling (for example, in the hammer, the method assigns a metal texture to the handle and a wooden texture to the head, which is not correct and arguably a worse result than TEXTure). Only two results are shown in ShapeNet car, and the ablation study is shown exclusively on a single object. Significant more results should be provided to convince the reader that the method is more effective than previous work.\", \"I am unconvinced about the metrics used in this paper. While standard for 3D object texturing work, LPIPS and FID do not adequately measure text-to-image alignment. 
CLIP-based metrics should be used in conjunction to the ones shown in this paper, to be more informative about how well this model is generating results aligned with the prompts. While visually more consistent than previous work, this model seems to struggle more than previous methods (particularly TEXTure and Text2Text) in asigning the correct texture to each part of the object. This is not something that LPIPS and FID can measure correctly.\", \"A user study should be provided for better comparisons between methods, across a bunch of dimensions, including alignment, realism, quality, consistency, etc.\", \"The quantitative results are not particularly convincing. The ablation study does not show significant improvements across the metrics used, particularly LPIPS, and without standard deviations of the errors it is hard to understand whether the improvements are actually statistically significant. Therefore, the ablation fails to convince that the proposed contributions are actually valuable and effective.\", \"No results are shown on 3D human avatar texturing, which is a very closely related and relevant line of work.\", \"Related to the previous point, the analysis of the related work is lacking on a set of areas. The most relevant is the work on 3D human texturing. Relevant work include: SMPLitex: A Generative Model and Dataset for 3D Human Texture Estimation from Single Image (BMVC 2023), UVMap-ID: A Controllable and Personalized UV Map Generative Model (ACMMM 2024), TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation (ECCV 2024), etc. Besides, 2D texturing methods with generative models should also be included as part of the related work.\", \"Limitations are not adequately addressed or discussed. From the paper, it seems like the only limitation of the proposed method is its computational cost. However, the results shown in the paper indicate that the method is by no means perfect and it struggles with consistently assigning the appropriate texture to different parts of the object, among other limitations. These should be mentioned more explicitly.\", \"Contributions are very overstated. Sentences like \\\"... demonstrate the superior performance of TexTailor in .... \\\" or \\\" ... surpases SOTA texture synthesis methods driven by language cues\\\" should be empirically demonstrated or removed altogether.\", \"The paper suggests some reasons why previous methods fail (autorregressive inference, integration of previous information, fixed camera positions, etc), but it fails to provide adequate evidence that these actually limiting factors.\"], \"questions\": [\"Can the authors provide a user study that measures consistency, alignment, quality, and realism? This should provide a better idea on the quality of the results on the actual goals that the paper aims to achieve.\", \"I believe that computational costs should be compared more explictly with previous work, so as to better understand the quality/cost pareto frontier in this line of work.\", \"I suggest the paper should provide a CLIP-guided text-image alignment metric.\", \"I suggest the paper should provide a more upfront discussion of its limitations.\", \"The paper should include a detailed analysis of 2D texturing models.\", \"The paper should include a detailed analysis of text-to-avatar models, as well as quantitative and qualitative comparisons.\", \"How does the model behave with non-diffuse objects? 
Very few glossy, metallic, or translucent objects are shown.\", \"The paper should include many more results, at least in the supplementary material.\", \"Results should include standard deviations to better understand the differences between methods in terms of LPIPS, FLIP, etc.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Thank you for taking the time to review our work. We greatly appreciate your recognition of the key aspects of our proposed methods and your positive feedback on the experiments. We will revise the manuscript to incorporate your suggestions.\\n\\n>Q1. While effective, the approach primarily combines existing techniques, with limited emphasis on novel contributions. The paper could be strengthened by enhancing the resampling scheme or accelerating the fine-tuning phase.\", \"i_believe_the_contributions_of_this_paper_are_as_follows\": \"1.\\tWe extend the resampling technique, previously used only in 2D image inpainting, to the field of 3D texture synthesis by applying it to the DDIM non-Markovian process.\\n\\n2.\\tWithout relying on an external dataset of 3D meshes, textures, or text, we propose a novel approach that fine-tunes the model using only a few images that accurately represent the texture of a specific object within the distribution learned by the existing depth-aware T2I model. Even with just a few images inferred by the original model, this approach effectively compensates for textures that are difficult to generate at certain angles and demonstrates the ability to maintain texture consistency across various viewpoints\\n\\n3.\\tTo address the catastrophic forgetting phenomenon\\u2014where a depth-aware T2I model forgets information originally learned from a large-scale dataset when fine-tuned on a smaller one\\u2014we propose the performance preservation loss to mitigate this issue and maintain the model's performance.\\n\\n4.\\tTo eliminate the need for manually configuring optimal camera positions based on object geometry\\u2014a process requiring significant time and effort\\u2014we introduce an adaptive method that adjusts camera positions dynamically based on the extent of texture coverage at each viewpoint.\\n\\nAs you mentioned, point 1 may seem less novel as it recombines existing methods. However, we believe that the other contributions, particularly the ability to generate consistent images across various angles, can be highly applicable not only to the texture synthesis field but also to other 3D domains requiring such consistency. This makes our work a valuable contribution to the ICLR community and a potential foundation for future research.\\n\\nWe anticipate that combining LoRA with ControlNet could shorten the fine-tuning phase while maintaining quality. However, due to time and resource constraints during the rebuttal period, we are unable to conduct all the necessary experiments at this time. We plan to explore this approach as part of our future work.\\n\\nAdditionally, to further demonstrate the effectiveness of our proposed methodology, we will showcase qualitative comparisons for four additional objects from the Objaverse dataset, beyond the four objects already presented in the qualitative results of the paper, in GIF format. Furthermore, we will include qualitative comparison results for clothed human meshes from RenderPeople [10], also presented in GIF format.\\n\\nThe text prompts for RenderPeople's clothed human meshes were created based on the ground truth images provided by RenderPeople, which were designed by professional designers.\"}",
"{\"summary\": \"This paper proposes TexTailor, a method for text-to-texture synthesis utilizing an inpainting approach to achieve view-consistent textures. To address common challenges in texture generation, TexTailor introduces a resampling scheme and fine-tuning to maintain texture consistency across viewpoints. Furthermore, it employs adaptive viewpoint refinement for efficient viewpoint sampling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes TexTailor to address view-consistent texture synthesis by combining inpainting with resampling and fine-tuning.\\n2. Method and results are presented clearly and logically, making the paper easy to follow.\", \"weaknesses\": \"1. While effective, the approach primarily combines existing techniques, with limited emphasis on novel contributions. The paper could be strengthened by enhancing the resampling scheme or accelerating the fine-tuning phase.\", \"questions\": \"1. Could the authors clarify the novel aspects of TexTailor? The current version of TexTailor appears to be a combination of existing methods. It would be helpful if they could elaborate on any unique modifications within the resampling scheme or improvements made to accelerate the fine-tuning process.\\n2. While the paper discusses performance preservation loss qualitatively, a quantitative analysis of its impact on quality would clarify its specific role. Including an ablation study of the performance preservation loss in Table 2 could better highlight its contribution to TexTailor\\u2019s performance.\\n3. Since the viewpoint refinement uses a fixed threshold, how sensitive is the model\\u2019s performance to changes in this parameter?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposed a novel architecture that can generate more consistent 3D texture than TEXTure and Text2Tex.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Proposed a better approach for viewpoint sampling.\\n2. The performance of the proposed method is better than listed SOTAs.\", \"weaknesses\": \"1. The resampling strategy is similar to the [TexFusion:Synthesizing 3D Textures with Text-Guided Image Diffusion Models] and [TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling]. Please explain the difference.\\n2. In Fig. 1b, it seems that the proposed method is over-smoothed. Please explain the reason.\\n3. Please answer the following questions.\", \"questions\": \"1. I do not get the correlation of the third paragraph in Section I with the main content. I think the geometry conversion is not a problem to be solved in this paper. And the SDS can be directly applied on the mesh (DMTET) which does not need conversion.\\n2. The authors mentioned that \\\" we can achieve high-quality texture with only 30 steps, significantly fewer than the 250 steps required by the original resampling method for a single view.\\\" Other methods like texture and text2tex only sampled no more than 50 steps for each view. Where is this \\\"250\\\" from.\\n3. What is the meaning of resampling steps? Does it mean you have to sample R steps for each view at each timestep?\\n4. The authors used \\\"resampled images near the first viewpoint to extract images of the same object from different angles in the output domain of the diffusion model\\\" . How to make sure that the viewpoints near the first viewpoint maintain the same style as the first view.\\n5. In the loss function of Eqn. 10, the target is constraining the new noise estimation to be the same as the original noise estimation. The what is the meaning of training? The optimal case is keeping the original model unchanged.\\n6. Any 3D results of the method? I prefer to see the rendered 360-degree videos of results.\\n7. The attention feature injection as in [Text-Guided Texturing by Synchronized Multi-View Diffusion] can help to reduce the problem of the autoregressive inpainting. Have you tried this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"#### RenderPeople Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"A business woman wearing a white blouse with a ribbon detail, light beige pants, nude-tone heels, and neatly tied blonde hair\\\"|[claudia](https://drive.google.com/file/d/1cugJNiHxkIxzZmfapOkUSUt1b5nmX-I4/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1tuUSCLRtZs8meKal50_kkfX0XkTDIlwU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1HkO7HNKWkFh4u_xZfxRTgiyQV35ENyvS/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1cYYHgVoRn5YN-BQGbSXxtVa3ONWLBpIU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1jpUILlxb39eSsAhZnk0kMBt2dTQ_iswl/view?usp=drive_link)|\\n|\\\"A man, wearing a white dress shirt, a black vest, black formal trousers, a black tie, a black belt, and dark formal shoes, with short neatly styled hair\\\"|[eric](https://drive.google.com/file/d/1zu8JFHMlV2oHCu84je9z-hJn1_YUACl0/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1ejcq1bB6djVUmmCqAxbqrDA2q_l_mOsK/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1kQ4JW-Nu14p8qiLvR4RwvB90Iu_elRoI/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1FItS8NhnffnzjoNPPU1or_t6ScWUveO2/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1YibXGY7AicDrCaalaJUChKGRE5nQ6T7c/view?usp=drive_link)|\\n|\\\"A man, wearing a gray short-sleeve T-shirt, blue jeans, white sneakers, and short, dark brown hair styled neatly\\\"|[manuel](https://drive.google.com/file/d/1yRmwmkkZ8c2VmC3fr2B92R2ztWT6BC0a/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1jasYCugvfEaWtXMusmCFExluKTggHtKX/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1kXzUleXzPhJkehtuoNUPhcj9YJzqYSRm/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1ZYkWlhq2cxa-jyxaPeHbdvqTmjQE9-RP/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1epeP2Ms2HC5AkW7uumw9f08H6dk-FVCc/view?usp=sharing)|\\n|\\\"A woman with medium-dark skin tone, wearing a black blazer, a black top, gray pants with a gray tied belt, black heels, and having neatly styled dark hair\\\"|[carla](https://drive.google.com/file/d/1SN1Hft8lAbLU4HHTxB7k6e_WxoG-h-r-/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1jJCNNpgXsB8xJKu6NnGkXicQ3ZDUOCRf/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1OZ-uWrrX06M19CDgpukyCMkM0A5A4l78/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1vc3EaAvNO18t8bUo8l_-KC5hhGTOzDTD/view?usp=sharing)|[carla](https://drive.google.com/file/d/1TlZ2wcokdUOz9Q-KIsSIShCiheelB1ox/view?usp=sharing)|\\n\\n>Q7. How does the model behave with non-diffuse objects? Very few glossy, metallic, or translucent objects are shown.\\n\\nWe will provide two GIF results for each type of object you mentioned, showcasing the outcomes.\\n\\n#### 1. glossy objects\\n|Text Prompt|\\\"a bowl\\\"|\\\"a cup\\\"|\\n|:-----:|:-----:|:-----:|\\n|Link|[link](https://drive.google.com/file/d/1qyHWpLgb26THiM0IHUhnJDa24fanzpnH/view?usp=drive_link)|[link](https://drive.google.com/file/d/1jkEb-Jo0uFfTsn4Q5FX7twzpVdMVJDnV/view?usp=drive_link)|\\n\\n#### 2. metallic objects\\n|Text Prompt|\\\"a faucet\\\"|\\\"a aerosel can\\\"|\\n|:-----:|:-----:|:-----:|\\n|Link|[link](https://drive.google.com/file/d/1_JN5iot_k8loDKTHfAZbbn5tyznbtYdB/view?usp=drive_link)|[link](https://drive.google.com/file/d/1lWl5BTPW9vFro78TicnukQBcJDa10UOK/view?usp=drive_link)|\\n\\n#### 3. 
translucent objects\\n|Text Prompt|\\\"a lightbulb\\\"|\\\"a candle\\\"|\\n|:-----:|:-----:|:-----:|\\n|Link|[link](https://drive.google.com/file/d/13E2HoLO1URUeeLCaM2NzPAxBYLK-S00e/view?usp=sharing)|[link](https://drive.google.com/file/d/1efGXb6KB0hHrfKuosUYukawbNckXCeax/view?usp=drive_link)|\\n\\n> Q9. The paper should include a detailed analysis of 2D texturing models.\\n\\nAre you referring to the four texture models mentioned in our paper when you say \\\"2D texturing models\\\"? If so, does the \\\"detailed analysis\\\" refer to the qualitative results presented in the paper?\\n\\n> Q10. Contributions are very overstated. Sentences like \\\"... demonstrate the superior performance of TexTailor in .... \\\" or \\\" ... surpases SOTA texture synthesis methods driven by language cues\\\" should be empirically demonstrated or removed altogether.\\n\\nThank you for pointing this out. We understand your concern regarding the overstatement of contributions. Once we have gathered all the reviewers' feedback, we will revise the manuscript comprehensively for the final version, ensuring that all claims are either empirically demonstrated or appropriately adjusted. We sincerely appreciate your constructive input.\"}",
"{\"comment\": \"#### 2. Ablation study\\n|Figure|\\n|:-----:|\\n|[link](https://drive.google.com/file/d/1I8zhpnybAV0g-dka4Ct2Z028jFyOHD4C/view?usp=drive_link)|\\n\\n##### (1) Effects of resampling. \\n\\nWhen observing the texture of \\\"a basket,\\\" the transition from $v_0$ to $v_1$ reveals a change in the texture of the inside of the basket in the left part. This occurs in the baseline method because the previously synthesized texture visible from the current viewpoint is only merged once per timestep during the diffusion process. In contrast, our methodology improves this by utilizing the resampling strategy proposed in the Repaint[6] paper, which allows the texture to be merged multiple times per timestep, resulting in a more consistent outcome.\\n\\n##### (2) Effects of training with resampled texture. \\n\\nWhen observing the object \\\"a briefcase,\\\" applying the Resampling method generates consistent textures for viewpoints that are close to the initial viewpoint $v_0$, such as the nearby viewpoint $v_5$. However, as the viewpoint moves further away from $v_0$, variations in texture properties begin to appear, and at the opposite viewpoint $v_9$, the texture properties have completely changed. In contrast, when the depth-aware T2I model is fine-tuned using five resampled texture images from viewpoints adjacent to $v_0$, the model fits to the distribution of the resampled textures, leading to noticeable improvements in consistency.\\n \\n\\n##### (3) Effects of adaptive view refinement.\\nWhen observing the object \\\"a cappuccino,\\\" transitioning from $v_{11}$ to $v_{12}$ on the left part reveals an issue where the bottom of the cup is generated incorrectly. Since there is no texture information from the previous viewpoint, the model generates the top of the cappuccino instead of the bottom. However, by using the adaptive view refinement technique on the right part, an intermediate viewpoint ($v_{13}$) is automatically added. This not only provides the texture information from $v_{11}$ to guide a more natural texture synthesis but also eliminates the need for the tedious process of manually configuring optimal camera positions.\\n\\n> Q3. I am unconvinced about the metrics used in this paper. While standard for 3D object texturing work, LPIPS and FID do not adequately measure text-to-image alignment. CLIP-based metrics should be used in conjunction to the ones shown in this paper, to be more informative about how well this model is generating results aligned with the prompts. While visually more consistent than previous work, this model seems to struggle more than previous methods (particularly TEXTure and Text2Text) in asigning the correct texture to each part of the object. This is not something that LPIPS and FID can measure correctly. I suggest the paper should provide a CLIP-guided text-image alignment metric.\\n\\nAs per your suggestion, we calculated the average cosine similarity between the prompt and image CLIP[1] embeddings for the four models compared in this paper.\\n\\n|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|0.2334|0.2295|0.2342|0.2315|0.2323|\\n\\nAs shown in the results, while TexTailor does not achieve the highest value compared to other models, the differences between the values across models are not significant enough to be considered meaningful. Therefore, we plan to address this point when incorporating all reviewers' feedback in the final version. 
Thank you for your valuable feedback and thoughtful suggestion.\\n\\n> Q4. A user study should be provided for better comparisons between methods, across a bunch of dimensions, including alignment, realism, quality, consistency, etc. Can the authors provide a user study that measures consistency, alignment, quality, and realism? This should provide a better idea on the quality of the results on the actual goals that the paper aims to achieve.\\n\\nThank you for suggesting a user study to evaluate consistency, alignment, quality, and realism. We agree that such a study would provide valuable insights into the quality of our results and how well they align with the goals of the paper. However, due to resource and time constraints, we were unable to recruit a sufficient number of users to conduct a statistically meaningful user study during this submission cycle. That said, we acknowledge the importance of this evaluation and plan to include a comprehensive user study in future work to further substantiate our findings.\"}",
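For reference, the prompt-image CLIP cosine similarity reported in the table above can be computed along the following lines. The checkpoint, rendered views, and preprocessing used in the rebuttal are not specified, so this is only a generic sketch using the Hugging Face CLIP implementation.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_text_image_similarity(prompt, image_paths):
    """Average cosine similarity between one text prompt and a set of rendered views."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # normalize the projected embeddings before taking the cosine similarity
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()
```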
"{\"comment\": \"#### 2.RenderPeople Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"A business woman wearing a white blouse with a ribbon detail, light beige pants, nude-tone heels, and neatly tied blonde hair\\\"|[claudia](https://drive.google.com/file/d/1cugJNiHxkIxzZmfapOkUSUt1b5nmX-I4/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1tuUSCLRtZs8meKal50_kkfX0XkTDIlwU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1HkO7HNKWkFh4u_xZfxRTgiyQV35ENyvS/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1cYYHgVoRn5YN-BQGbSXxtVa3ONWLBpIU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1jpUILlxb39eSsAhZnk0kMBt2dTQ_iswl/view?usp=drive_link)|\\n|\\\"A man, wearing a white dress shirt, a black vest, black formal trousers, a black tie, a black belt, and dark formal shoes, with short neatly styled hair\\\"|[eric](https://drive.google.com/file/d/1zu8JFHMlV2oHCu84je9z-hJn1_YUACl0/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1ejcq1bB6djVUmmCqAxbqrDA2q_l_mOsK/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1kQ4JW-Nu14p8qiLvR4RwvB90Iu_elRoI/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1FItS8NhnffnzjoNPPU1or_t6ScWUveO2/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1YibXGY7AicDrCaalaJUChKGRE5nQ6T7c/view?usp=drive_link)|\\n|\\\"A man, wearing a gray short-sleeve T-shirt, blue jeans, white sneakers, and short, dark brown hair styled neatly\\\"|[manuel](https://drive.google.com/file/d/1yRmwmkkZ8c2VmC3fr2B92R2ztWT6BC0a/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1jasYCugvfEaWtXMusmCFExluKTggHtKX/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1kXzUleXzPhJkehtuoNUPhcj9YJzqYSRm/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1ZYkWlhq2cxa-jyxaPeHbdvqTmjQE9-RP/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1epeP2Ms2HC5AkW7uumw9f08H6dk-FVCc/view?usp=sharing)|\\n|\\\"A woman with medium-dark skin tone, wearing a black blazer, a black top, gray pants with a gray tied belt, black heels, and having neatly styled dark hair\\\"|[carla](https://drive.google.com/file/d/1SN1Hft8lAbLU4HHTxB7k6e_WxoG-h-r-/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1jJCNNpgXsB8xJKu6NnGkXicQ3ZDUOCRf/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1OZ-uWrrX06M19CDgpukyCMkM0A5A4l78/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1vc3EaAvNO18t8bUo8l_-KC5hhGTOzDTD/view?usp=sharing)|[carla](https://drive.google.com/file/d/1TlZ2wcokdUOz9Q-KIsSIShCiheelB1ox/view?usp=sharing)|\"}",
"{\"comment\": \"> Q5. The quantitative results are not particularly convincing. The ablation study does not show significant improvements across the metrics used, particularly LPIPS, and without standard deviations of the errors it is hard to understand whether the improvements are actually statistically significant. Therefore, the ablation fails to convince that the proposed contributions are actually valuable and effective. and The paper suggests some reasons why previous methods fail (autorregressive inference, integration of previous information, fixed camera positions, etc), but it fails to provide adequate evidence that these actually limiting factors. Results should include standard deviations to better understand the differences between methods in terms of LPIPS, FLIP, etc.\\n\\nI agree with your point. Not including the standard deviations of the errors in quantitative comparisons can indeed reduce the reliability of the experiments. However, we regret that due to time and resource constraints during the rebuttal period, we were unable to repeat all experiments multiple times, and we sincerely apologize for this limitation.\\n\\nWe greatly value your feedback and will make an effort to include standard deviations in future experiments to enhance reliability. Additionally, we kindly ask for your understanding, as the models compared in this paper also did not include standard deviations in their results. Thank you for your thoughtful feedback.\\n\\n> Q6. No results are shown on 3D human avatar texturing, which is a very closely related and relevant line of work. Related to the previous point, the analysis of the related work is lacking on a set of areas. The most relevant is the work on 3D human texturing. Relevant work include: SMPLitex: A Generative Model and Dataset for 3D Human Texture Estimation from Single Image (BMVC 2023), UVMap-ID: A Controllable and Personalized UV Map Generative Model (ACMMM 2024), TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation (ECCV 2024), etc. Besides, 2D texturing methods with generative models should also be included as part of the related work. The paper should include a detailed analysis of text-to-avatar models, as well as quantitative and qualitative comparisons.\\n\\nI completely agree with your point. 3D human avatar texturing is a highly active research area and holds significant importance in various applications. However, unfortunately, the methods you mentioned\\u2014SMPLitex[2], UVMap-ID[3], and TexDreamer[4]\\u2014are aimed at image-guided texture synthesis rather than text-guided texture synthesis, making direct comparisons challenging.\\n\\nAdditionally, we were unable to find an appropriate dataset combining 3D clothed human meshes with corresponding text descriptions for evaluation. As such, we conducted qualitative comparisons on four clothed human meshes provided by RenderPeople[5], using the four models currently compared in the paper. We kindly ask for your understanding regarding this limitation.\\n\\nThe text prompts for RenderPeople's clothed human meshes were created based on the ground truth images provided by RenderPeople, which were designed by professional designers.\"}",
"{\"metareview\": \"Summary:\\n- This paper presents a method for creating view-consistent texture on a 3D object using text prompts. The basic approach follows an inpainting framework and proposes a resampling method and fine-tuning to improve texture consistency across viewpoints.\", \"strength\": [\"\\\"solid and correct, analysis is thorough and well-motivated\\\"\", \"well-written paper.\"], \"weakness\": [\"This paper primarily combines existing techniques. The technical contributions are limited.\"], \"justification\": [\"All four reviewers are leaning positive about this paper, particularly after the rebuttal, where the authors' rebuttal adequately addressed several initial concerns from the reviewers. While the paper does not present intrinsically novel techniques, the integration of existing methods is well-motivated and well-executed. The evaluation is solid and validates the claims of improved view-consistent texture generation. The AC agrees with the reviewers and recommends to accept.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewer uECh asked for classifications of the paper contributions, ablation study for performance preservation loss, and the sensitivity in the threshold parameter.\\n\\nThe authors provided a detailed response with additional ablation study results. Reviewer uECh still considered the paper's technical contribution incremental but is satisfied with the responses. The rebuttal did not address the sensitivity in the threshold parameter, but the AC considers a minor issue. \\n\\nReviewer 3JDj commented on the novelty of the resampling scheme, insufficient results (e.g., 3D human texturing), and the metrics. The rebuttal responded by providing additional results on the RenderPeople dataset, additional types of objects, and runtime/memory comparisons. Reviewer 3JDj still has concerns about the scope and contribution but decided to raise the rating since most initial concerns were adequately addressed. \\n\\nReviewer ujQC was initially negative about this work, highlighting issues in the method exposition and experiments. Reviewer ujQC finds the explanations about the metrics convincing and raises the rating to borderline positive. \\n\\nReviewer NBoV comments that the resampling strategy is similar to several prior work and asks for clarification. Reviewer NBoV did not respond to the author's rebuttal. \\n\\nOverall, the reviewers appreciate the authors' detailed responses on the initial concerns. The additional explanations and results further validate the method's effectiveness and improvement over prior art. The rebuttal successfully convinced two reviewers to increase their scores.\"}",
"{\"comment\": \"#### 1.Objaverse Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"a basketball\\\"|[link](https://drive.google.com/file/d/1PUhh1CFVGfzvHqeNS7-VWlspWygV7M9i/view?usp=drive_link)|[link](https://drive.google.com/file/d/10-Vjeq2lBFIrApNeJ409yoJL3tbdL7M5/view?usp=drive_link)|[link](https://drive.google.com/file/d/1-zmxggvpEi17NlXk4rwqswcguWHToUnx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1tGnNDX-1Z1AWRACQzloKAnW7EJrbkIgV/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ChOOEJCHugwlXSTZMtRPVyR7MOBsXiow/view?usp=drive_link)|\\n|\\\"a cd player\\\"|[link](https://drive.google.com/file/d/1fKhj-8aeGkVKjfb3FwtGUBkuf7FuKiM2/view?usp=sharing)|[link](https://drive.google.com/file/d/1VdXOQslsVru-x2x19tGYfKttT9rI7hit/view?usp=sharing)|[link](https://drive.google.com/file/d/1JE_LDcZm1-oBX4QJN-dIlmBCcNzJz2ne/view?usp=sharing)|[link](https://drive.google.com/file/d/1sRkv6iXBa7e9TVSMcV3hrlbZ8tZo0u_m/view?usp=sharing)|[link](https://drive.google.com/file/d/17k4XOBHya95ZsYwqRs_NONUff9Apw7r4/view?usp=sharing)|\\n|\\\"a desk chair\\\"|[link](https://drive.google.com/file/d/1mm_0cvW2RsvktsLcy07JXAeS5CGVfENO/view?usp=drive_link)|[link](https://drive.google.com/file/d/1yJQkmI6THlNn3hoL-mW2O7D5CI_A6gq9/view?usp=drive_link)|[link](https://drive.google.com/file/d/1pRvTW1eEjVNk7rycoJRAJ8IdvPqHEj46/view?usp=drive_link)|[link](https://drive.google.com/file/d/1Jpx9vtBIV7ymD7zRHLBIPSNAxSItMjFe/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ELVqtUhQbT5XmUcqko0h1pJjMGaicmAl/view?usp=drive_link)|\\n|\\\"a dumpster\\\"|[link](https://drive.google.com/file/d/1HKaoGX_dUN0-TQXlmIAuCbHygUFUV3bD/view?usp=drive_link)|[link](https://drive.google.com/file/d/1f1m09_mCKa9_4KZOGKme8BwsAc_7lkJG/view?usp=drive_link)|[link](https://drive.google.com/file/d/18UZWI-sGhUqXNuFW0SdnIFFO_QNVoiNI/view?usp=drive_link)|[link](https://drive.google.com/file/d/1zN_VombixmEsueJfd6L5kP7zws5fzq0G/view?usp=drive_link)|[link](https://drive.google.com/file/d/1jtiXfphExSoW4wGiDGUBRTD1t4nGVdKi/view?usp=drive_link)|\\n|\\\"a hammer\\\"|[link](https://drive.google.com/file/d/1RSq-vzcPfclttJToHObHypn7hs17VLEd/view?usp=sharing)|[link](https://drive.google.com/file/d/1Kw7aEhSdx3mCXdYkGKRzrGeobqaLJwAW/view?usp=sharing)|[link](https://drive.google.com/file/d/1aexhUd7Nl0Aey3Vg4I-XnoAdlzb3UZPJ/view?usp=drive_link)|[link](https://drive.google.com/file/d/1kxIuyKW3ROjT25KaohiRxpwVAwvQaVWC/view?usp=sharing)|[link](https://drive.google.com/file/d/1GqPePJEubcHmS5LOnMir5OmVIsHv8iTL/view?usp=drive_link)|\\n|\\\"a minivan\\\"|[link](https://drive.google.com/file/d/1yEG9Ni4rrNaZ4kCvL4WRgrY5nU2CxfZS/view?usp=drive_link)|[link](https://drive.google.com/file/d/11nAvcQqOKF-fCaP5-ZAHBNTzD27Fujz2/view?usp=drive_link)|[link](https://drive.google.com/file/d/1sUQbNuGWEw9S85RozmnzII0ByIneFv3W/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ku15fjR4ts_2KJm0Rl6OqGi5h9mP3hqi/view?usp=drive_link)|[link](https://drive.google.com/file/d/1EJcNOro3_wlw6spZ7TYFwtWfZ1Vx3dbL/view?usp=drive_link)|\\n|\\\"a money\\\"|[front](https://drive.google.com/file/d/10rPPgZsWc0iotldevklujifOZMbiz-0X/view?usp=drive_link), [back](https://drive.google.com/file/d/1PaRrWMzpS073RU7bg8NKvnB2wVvnF1ZX/view?usp=drive_link)|[front](https://drive.google.com/file/d/1uNwXkh7O4UASFmm_BhDJJfX8MYWj_My6/view?usp=drive_link), 
[back](https://drive.google.com/file/d/10x-gy15J7OgJdymqFsalM_R-aF-XCJi4/view?usp=drive_link)|[front](https://drive.google.com/file/d/1rp4m1AdduMMOUyXMHUng5eq8jmQBsF1e/view?usp=drive_link), [back](https://drive.google.com/file/d/1mRWQlFv7YF4SeOrPSq4GrXkro6nveYzC/view?usp=drive_link)|[front](https://drive.google.com/file/d/1EFDwk-csQEiokhU09NiubX-yGC4NNXr7/view?usp=drive_link), [back](https://drive.google.com/file/d/1I_uEbvFRMn9HdfhjgDSNf7miW_fgTLQf/view?usp=drive_link)|[front](https://drive.google.com/file/d/1SRVIlpRovqU2o2d1XYQMkdHXsgd_TQxC/view?usp=drive_link), [back](https://drive.google.com/file/d/1Oc45Uj5M5ySbKikOYqvl6jA0KX9sEB6O/view?usp=drive_link)|\\n|\\\"a sushi\\\"|[link](https://drive.google.com/file/d/15QWzVwVvJ-r7rFqUcVXGhnmNGRMdUClx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1d7hxtTi8NFio4kAXeMJjIGyrIaOwmfmA/view?usp=drive_link)|[link](https://drive.google.com/file/d/1uZerJAnPSwV-bAw0e3JXwBpaPsL96XzK/view?usp=drive_link)|[link](https://drive.google.com/file/d/1VMeuBZoB-CuDDc0ekSCb5ezrXQLeVNgr/view?usp=drive_link)|[link](https://drive.google.com/file/d/1gSCl1HYbhxIK4ovEEs0OJVF1CgqIgHLg/view?usp=drive_link)|\"}",
"{\"comment\": \"Thanks for providing more explanations about the metrics. It looks like LPIPS has been there for a while in this line of work but nobody finds its flaws as a metric in this task.\\n\\nI'll increase my rating. Please incorporate the discussions in the revision. \\n\\nCheers.\"}",
"{\"comment\": \"#### 1.Objaverse Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"a basketball\\\"|[link](https://drive.google.com/file/d/1PUhh1CFVGfzvHqeNS7-VWlspWygV7M9i/view?usp=drive_link)|[link](https://drive.google.com/file/d/10-Vjeq2lBFIrApNeJ409yoJL3tbdL7M5/view?usp=drive_link)|[link](https://drive.google.com/file/d/1-zmxggvpEi17NlXk4rwqswcguWHToUnx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1tGnNDX-1Z1AWRACQzloKAnW7EJrbkIgV/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ChOOEJCHugwlXSTZMtRPVyR7MOBsXiow/view?usp=drive_link)|\\n|\\\"a cd player\\\"|[link](https://drive.google.com/file/d/1fKhj-8aeGkVKjfb3FwtGUBkuf7FuKiM2/view?usp=sharing)|[link](https://drive.google.com/file/d/1VdXOQslsVru-x2x19tGYfKttT9rI7hit/view?usp=sharing)|[link](https://drive.google.com/file/d/1JE_LDcZm1-oBX4QJN-dIlmBCcNzJz2ne/view?usp=sharing)|[link](https://drive.google.com/file/d/1sRkv6iXBa7e9TVSMcV3hrlbZ8tZo0u_m/view?usp=sharing)|[link](https://drive.google.com/file/d/17k4XOBHya95ZsYwqRs_NONUff9Apw7r4/view?usp=sharing)|\\n|\\\"a desk chair\\\"|[link](https://drive.google.com/file/d/1mm_0cvW2RsvktsLcy07JXAeS5CGVfENO/view?usp=drive_link)|[link](https://drive.google.com/file/d/1yJQkmI6THlNn3hoL-mW2O7D5CI_A6gq9/view?usp=drive_link)|[link](https://drive.google.com/file/d/1pRvTW1eEjVNk7rycoJRAJ8IdvPqHEj46/view?usp=drive_link)|[link](https://drive.google.com/file/d/1Jpx9vtBIV7ymD7zRHLBIPSNAxSItMjFe/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ELVqtUhQbT5XmUcqko0h1pJjMGaicmAl/view?usp=drive_link)|\\n|\\\"a dumpster\\\"|[link](https://drive.google.com/file/d/1HKaoGX_dUN0-TQXlmIAuCbHygUFUV3bD/view?usp=drive_link)|[link](https://drive.google.com/file/d/1f1m09_mCKa9_4KZOGKme8BwsAc_7lkJG/view?usp=drive_link)|[link](https://drive.google.com/file/d/18UZWI-sGhUqXNuFW0SdnIFFO_QNVoiNI/view?usp=drive_link)|[link](https://drive.google.com/file/d/1zN_VombixmEsueJfd6L5kP7zws5fzq0G/view?usp=drive_link)|[link](https://drive.google.com/file/d/1jtiXfphExSoW4wGiDGUBRTD1t4nGVdKi/view?usp=drive_link)|\\n|\\\"a hammer\\\"|[link](https://drive.google.com/file/d/1RSq-vzcPfclttJToHObHypn7hs17VLEd/view?usp=sharing)|[link](https://drive.google.com/file/d/1Kw7aEhSdx3mCXdYkGKRzrGeobqaLJwAW/view?usp=sharing)|[link](https://drive.google.com/file/d/1aexhUd7Nl0Aey3Vg4I-XnoAdlzb3UZPJ/view?usp=drive_link)|[link](https://drive.google.com/file/d/1kxIuyKW3ROjT25KaohiRxpwVAwvQaVWC/view?usp=sharing)|[link](https://drive.google.com/file/d/1GqPePJEubcHmS5LOnMir5OmVIsHv8iTL/view?usp=drive_link)|\\n|\\\"a minivan\\\"|[link](https://drive.google.com/file/d/1yEG9Ni4rrNaZ4kCvL4WRgrY5nU2CxfZS/view?usp=drive_link)|[link](https://drive.google.com/file/d/11nAvcQqOKF-fCaP5-ZAHBNTzD27Fujz2/view?usp=drive_link)|[link](https://drive.google.com/file/d/1sUQbNuGWEw9S85RozmnzII0ByIneFv3W/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ku15fjR4ts_2KJm0Rl6OqGi5h9mP3hqi/view?usp=drive_link)|[link](https://drive.google.com/file/d/1EJcNOro3_wlw6spZ7TYFwtWfZ1Vx3dbL/view?usp=drive_link)|\\n|\\\"a money\\\"|[front](https://drive.google.com/file/d/10rPPgZsWc0iotldevklujifOZMbiz-0X/view?usp=drive_link), [back](https://drive.google.com/file/d/1PaRrWMzpS073RU7bg8NKvnB2wVvnF1ZX/view?usp=drive_link)|[front](https://drive.google.com/file/d/1uNwXkh7O4UASFmm_BhDJJfX8MYWj_My6/view?usp=drive_link), 
[back](https://drive.google.com/file/d/10x-gy15J7OgJdymqFsalM_R-aF-XCJi4/view?usp=drive_link)|[front](https://drive.google.com/file/d/1rp4m1AdduMMOUyXMHUng5eq8jmQBsF1e/view?usp=drive_link), [back](https://drive.google.com/file/d/1mRWQlFv7YF4SeOrPSq4GrXkro6nveYzC/view?usp=drive_link)|[front](https://drive.google.com/file/d/1EFDwk-csQEiokhU09NiubX-yGC4NNXr7/view?usp=drive_link), [back](https://drive.google.com/file/d/1I_uEbvFRMn9HdfhjgDSNf7miW_fgTLQf/view?usp=drive_link)|[front](https://drive.google.com/file/d/1SRVIlpRovqU2o2d1XYQMkdHXsgd_TQxC/view?usp=drive_link), [back](https://drive.google.com/file/d/1Oc45Uj5M5ySbKikOYqvl6jA0KX9sEB6O/view?usp=drive_link)|\\n|\\\"a sushi\\\"|[link](https://drive.google.com/file/d/15QWzVwVvJ-r7rFqUcVXGhnmNGRMdUClx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1d7hxtTi8NFio4kAXeMJjIGyrIaOwmfmA/view?usp=drive_link)|[link](https://drive.google.com/file/d/1uZerJAnPSwV-bAw0e3JXwBpaPsL96XzK/view?usp=drive_link)|[link](https://drive.google.com/file/d/1VMeuBZoB-CuDDc0ekSCb5ezrXQLeVNgr/view?usp=drive_link)|[link](https://drive.google.com/file/d/1gSCl1HYbhxIK4ovEEs0OJVF1CgqIgHLg/view?usp=drive_link)|\"}",
"{\"comment\": \"> Q8. The attention feature injection as in [Text-Guided Texturing by Synchronized Multi-View Diffusion] can help to reduce the problem of the autoregressive inpainting. Have you tried this?\\n\\nIf you\\u2019re referring to the Self-Attention Reuse part proposed in Text-Guided Texturing by Synchronized Multi-View Diffusion [9], unfortunately, I hadn\\u2019t considered this methodology before submitting the paper and therefore have not attempted it. However, after reading the paper, I believe that utilizing the self-attention blocks of ControlNet\\u2019s encoder during the training process could be a novel and promising approach. Currently, due to a lack of time and resources, we are unable to reproduce all the results, but I will definitely consider trying it as part of my future work. Thank you for recommending such an excellent paper.\\n\\n> Q9. Any 3D results of the method? I prefer to see the rendered 360-degree videos of results.\\n\\nIn addition to the four objects from the Objaverse dataset presented in the qualitative results of the paper, we will showcase qualitative comparisons for four additional objects from the same dataset in GIF format. Additionally, we will provide qualitative comparison results for clothed human meshes from RenderPeople[10], also presented in GIF format.\\n\\nThe text prompts for RenderPeople's clothed human meshes were created based on the ground truth images provided by RenderPeople, which were designed by professional designers.\"}",
"{\"comment\": \">Q4. Is LPIPS (Section 4.1, Evaluation metrics) a good metric to evaluate view consistency, as LPIPS is sensitive to spatial information? Given that the view angles are known, would it make more sense to reproject one of the views to another and then compute LPIPS between the projected view and the other one?\\n\\nI feel that I might not fully understand this part, so I would like to kindly ask for a more detailed explanation if possible. Assuming the object is fixed and the camera rotates around it to render the object from different viewpoints, what specific spatial information in the rendered images changes as the viewpoint shifts? Additionally, could you explain precisely what is meant by \\\"reproject one of the views to another\\\"? Thank you very much for your valuable time and assistance.\\n\\n>Q5. What does the performance preservation loss do in Eqn. (10)? Why would it be effective at a high level?\\n\\nThe reason for introducing Eq. (10) is that, when training ControlNet with a limited number of images (in this case, rendered images from viewpoints close to the first viewpoint), the model is at high risk of forgetting the information it had previously learned. This results in a phenomenon known as catastrophic forgetting, which can significantly impair ControlNet's performance. To address this issue, Eq. (10) ensures that the parameters of ControlNet remain close to the values learned during its original training on a large dataset.\\n\\n>Q6. Writing and figure problems\\n\\nThank you for your thorough review of our work and for highlighting areas in our writing and figures that could be improved. We sincerely apologize if these issues caused any difficulty in understanding or disrupted your reading experience. We deeply value your feedback and will carefully review the manuscript, including all figures, to address potential issues and ensure clarity and precision in the final version after incorporating feedback from all reviewers.\\n\\n>Q7. The authors do not show any viewpoint-varying results in video format, making it less convincing that TexTailor achieves a good view consistency.\\n\\nIn addition to the four objects from the Objaverse dataset presented in the qualitative results of the paper, we will showcase qualitative comparisons for four additional objects from the same dataset in GIF format. Additionally, we will provide qualitative comparison results for clothed human meshes from RenderPeople[6], also presented in GIF format.\\n\\nThe text prompts for RenderPeople's clothed human meshes were created based on the ground truth images provided by RenderPeople, which were designed by professional designers.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"I thank the authors for an oustandingly thorough rebuttal to my and other reviewers' concerns. While I remain unconvinced about the scope of the contributions and the lack of a user study, many of my original concerns have been adressed by the authors and I will therefore increase my rating accordingly.\"}",
"{\"comment\": \"Additionally, I would like to take this opportunity to clarify the contributions of my paper, as I believe they may not have been explicitly highlighted in the manuscript.\", \"i_believe_the_contributions_of_this_paper_are_as_follows\": \"1. We extend the resampling technique, previously used only in 2D image inpainting, to the field of 3D texture synthesis by applying it to the DDIM non-Markovian process.\\n2. Without relying on an external dataset of 3D meshes, textures, or text, we propose a novel approach that fine-tunes the model using only a few images that accurately represent the texture of a specific object within the distribution learned by the existing depth-aware T2I model. Even with just a few images inferred by the original model, this approach effectively compensates for textures that are difficult to generate at certain angles and demonstrates the ability to maintain texture consistency across various viewpoints\\n3. To address the catastrophic forgetting phenomenon\\u2014where a depth-aware T2I model forgets information originally learned from a large-scale dataset when fine-tuned on a smaller one\\u2014we propose the performance preservation loss to mitigate this issue and maintain the model's performance.\\n4. To eliminate the need for manually configuring optimal camera positions based on object geometry\\u2014a process requiring significant time and effort\\u2014we introduce an adaptive method that adjusts camera positions dynamically based on the extent of texture coverage at each viewpoint.\\n\\nwe believe that the other contributions, particularly the ability to generate consistent images across various angles, can be highly applicable not only to the texture synthesis field but also to other 3D domains requiring such consistency. This makes our work a valuable contribution to the ICLR community and a potential foundation for future research.\"}",
"{\"comment\": \"Thank you for the detailed response to my review and the concerns raised by other reviewers. While I still find the contribution somewhat limited, most of my initial concerns have been addressed, and I will adjust my rating upward accordingly.\"}",
"{\"comment\": \"Thank you for taking the time to review our work. We greatly appreciate your recognition of the key aspects of our proposed methods and your positive feedback on the experiments. We will revise the manuscript to incorporate your suggestions. If any part of our response is unclear, please do not hesitate to reach out for further clarification\", \"detailed_response_to_comments\": \"> Q1. In Line 93, it is not clear to me why finetuning a depth-aware T2I model matters. Maybe including a brief explanation could be helpful.\\n\\nAs shown in Fig. 1(b), existing methods sequentially transition from the initial viewpoint to subsequent viewpoints, generating and merging textures using a depth-aware T2I model. However, this approach has two major issues: (1) the textures synthesized at the initial viewpoint become invisible as the viewpoint transitions, and (2) even when using an image inpainting strategy, the generated textures can vary significantly depending on the viewing angle. For example, in the pencil case example of Fig. 1(b) under Text2Tex, the blue texture generated from the second viewpoint may appear natural from that specific angle. However, when observed from the angle shown in the third row, only the blue texture region (with the textures synthesized at the first viewpoint no longer visible) is conditioned into the depth-aware T2I model, leading to the generation of textures that deviate from the original texture properties.\\n\\nTo address these issues, this paper assumes that the inability of depth-aware T2I models to maintain texture properties from the initial viewpoint when transitioning across viewpoints is the root cause of the problem. To mitigate this, we additionally train the depth-aware T2I model using a small set of texture images synthesized from viewpoints close to the first viewpoint (which effectively preserve the texture properties of the first viewpoint). This ensures that the depth-aware T2I model is better fitted to those texture properties, enabling the generation of textures with similar properties even as the viewpoint transitions.\\n\\n>Q2. In Section 3.1, the authors propose a non-Markov process to reduce the sampling steps. However, the benefits of it is confusing to me. Would it involve a faster sampling speed? If it would, there is not result to support it. On the other hand, the authors mainly show the effects of resampling is to \\\"preserve the texture properties\\\" (Line 480). This makes me confused about the motivation of newly proposed resampling trick.\\n\\nAs you mentioned, the resampling trick plays a key role in improving texture consistency between two adjacent viewpoints. This resampling trick is based on the Lumayr et el.[8] from the 2D image inpainting field, where the authors recommend 250 timesteps for applying the resampling trick.\\n\\nHowever, such a high number of timesteps makes it challenging to apply the resampling trick to the texture synthesis domain. Specifically, when performing $R$-step resampling per timestep, the total number of timesteps required in 2D image synthesis is $250 \\\\times R$. In contrast, for 3D texture synthesis, the process must account for the number of viewpoints, leading to $250 \\\\times R \\\\times $(Number of viewpoints) timesteps in total. 
This results in an exceedingly high computational cost for generating the entire texture.\\n\\nTo address this issue, we replaced the DDPM-based sampling[2] method used in Repaint with a DDIM-based sampling [3] method, which allowed us to reduce the number of required timesteps per viewpoint from 250 to 30 while maintaining high quality. We believe the quality preservation is demonstrated in our ablation study, which shows consistent results with and without the resampling trick.\\n\\n>Q3. It does not make sense to me the authors choose to not compare with text-driven methods (Line 373-374) just because they have \\\"difficulties\\\" when optimizing textures for \\\"cars\\\". Wouldn't it be a good chance to showcase the superiority of TexTailor? What are the difficulties for the text-driven methods mentioned in Lines 373 and 374? (Weakness3 & Question1)\\n\\nThe \\\"difficulty\\\" mentioned in lines 373\\u2013374 of this paper refers to cases where the image generated by the depth-aware T2I model is not properly back-projected onto the corresponding part of the mesh.\\n\\n|Model|Generated Output|Projected Output|\\n|:----:|:----:|:----:|\\n|TEXTure|[link](https://drive.google.com/file/d/1INKUqQvy-zIiO9lraq0Tm-XEkPWogfej/view?usp=drive_link)|[link](https://drive.google.com/file/d/1osU9Ss9M60D2-TNjW1fRIaTVyq8bKhPk/view?usp=drive_link)|\\n\\nWhile, as you pointed out, this could be argued as an advantage of xatlas-based texture mapping methods used in Text2Tex and TexTailor (our method), the primary focus of this paper is not to conduct a detailed comparison between xatlas[4] and differentiable mesh rendering (NVDiffRast[5]) or propose a method based on such an analysis. Therefore, to maintain consistency in the manuscript, we opted to briefly mention this issue and move on.\"}",
"{\"comment\": \"#### 1.Objaverse Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"a basketball\\\"|[link](https://drive.google.com/file/d/1PUhh1CFVGfzvHqeNS7-VWlspWygV7M9i/view?usp=drive_link)|[link](https://drive.google.com/file/d/10-Vjeq2lBFIrApNeJ409yoJL3tbdL7M5/view?usp=drive_link)|[link](https://drive.google.com/file/d/1-zmxggvpEi17NlXk4rwqswcguWHToUnx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1tGnNDX-1Z1AWRACQzloKAnW7EJrbkIgV/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ChOOEJCHugwlXSTZMtRPVyR7MOBsXiow/view?usp=drive_link)|\\n|\\\"a cd player\\\"|[link](https://drive.google.com/file/d/1fKhj-8aeGkVKjfb3FwtGUBkuf7FuKiM2/view?usp=sharing)|[link](https://drive.google.com/file/d/1VdXOQslsVru-x2x19tGYfKttT9rI7hit/view?usp=sharing)|[link](https://drive.google.com/file/d/1JE_LDcZm1-oBX4QJN-dIlmBCcNzJz2ne/view?usp=sharing)|[link](https://drive.google.com/file/d/1sRkv6iXBa7e9TVSMcV3hrlbZ8tZo0u_m/view?usp=sharing)|[link](https://drive.google.com/file/d/17k4XOBHya95ZsYwqRs_NONUff9Apw7r4/view?usp=sharing)|\\n|\\\"a desk chair\\\"|[link](https://drive.google.com/file/d/1mm_0cvW2RsvktsLcy07JXAeS5CGVfENO/view?usp=drive_link)|[link](https://drive.google.com/file/d/1yJQkmI6THlNn3hoL-mW2O7D5CI_A6gq9/view?usp=drive_link)|[link](https://drive.google.com/file/d/1pRvTW1eEjVNk7rycoJRAJ8IdvPqHEj46/view?usp=drive_link)|[link](https://drive.google.com/file/d/1Jpx9vtBIV7ymD7zRHLBIPSNAxSItMjFe/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ELVqtUhQbT5XmUcqko0h1pJjMGaicmAl/view?usp=drive_link)|\\n|\\\"a dumpster\\\"|[link](https://drive.google.com/file/d/1HKaoGX_dUN0-TQXlmIAuCbHygUFUV3bD/view?usp=drive_link)|[link](https://drive.google.com/file/d/1f1m09_mCKa9_4KZOGKme8BwsAc_7lkJG/view?usp=drive_link)|[link](https://drive.google.com/file/d/18UZWI-sGhUqXNuFW0SdnIFFO_QNVoiNI/view?usp=drive_link)|[link](https://drive.google.com/file/d/1zN_VombixmEsueJfd6L5kP7zws5fzq0G/view?usp=drive_link)|[link](https://drive.google.com/file/d/1jtiXfphExSoW4wGiDGUBRTD1t4nGVdKi/view?usp=drive_link)|\\n|\\\"a hammer\\\"|[link](https://drive.google.com/file/d/1RSq-vzcPfclttJToHObHypn7hs17VLEd/view?usp=sharing)|[link](https://drive.google.com/file/d/1Kw7aEhSdx3mCXdYkGKRzrGeobqaLJwAW/view?usp=sharing)|[link](https://drive.google.com/file/d/1aexhUd7Nl0Aey3Vg4I-XnoAdlzb3UZPJ/view?usp=drive_link)|[link](https://drive.google.com/file/d/1kxIuyKW3ROjT25KaohiRxpwVAwvQaVWC/view?usp=sharing)|[link](https://drive.google.com/file/d/1GqPePJEubcHmS5LOnMir5OmVIsHv8iTL/view?usp=drive_link)|\\n|\\\"a minivan\\\"|[link](https://drive.google.com/file/d/1yEG9Ni4rrNaZ4kCvL4WRgrY5nU2CxfZS/view?usp=drive_link)|[link](https://drive.google.com/file/d/11nAvcQqOKF-fCaP5-ZAHBNTzD27Fujz2/view?usp=drive_link)|[link](https://drive.google.com/file/d/1sUQbNuGWEw9S85RozmnzII0ByIneFv3W/view?usp=drive_link)|[link](https://drive.google.com/file/d/1ku15fjR4ts_2KJm0Rl6OqGi5h9mP3hqi/view?usp=drive_link)|[link](https://drive.google.com/file/d/1EJcNOro3_wlw6spZ7TYFwtWfZ1Vx3dbL/view?usp=drive_link)|\\n|\\\"a money\\\"|[front](https://drive.google.com/file/d/10rPPgZsWc0iotldevklujifOZMbiz-0X/view?usp=drive_link), [back](https://drive.google.com/file/d/1PaRrWMzpS073RU7bg8NKvnB2wVvnF1ZX/view?usp=drive_link)|[front](https://drive.google.com/file/d/1uNwXkh7O4UASFmm_BhDJJfX8MYWj_My6/view?usp=drive_link), 
[back](https://drive.google.com/file/d/10x-gy15J7OgJdymqFsalM_R-aF-XCJi4/view?usp=drive_link)|[front](https://drive.google.com/file/d/1rp4m1AdduMMOUyXMHUng5eq8jmQBsF1e/view?usp=drive_link), [back](https://drive.google.com/file/d/1mRWQlFv7YF4SeOrPSq4GrXkro6nveYzC/view?usp=drive_link)|[front](https://drive.google.com/file/d/1EFDwk-csQEiokhU09NiubX-yGC4NNXr7/view?usp=drive_link), [back](https://drive.google.com/file/d/1I_uEbvFRMn9HdfhjgDSNf7miW_fgTLQf/view?usp=drive_link)|[front](https://drive.google.com/file/d/1SRVIlpRovqU2o2d1XYQMkdHXsgd_TQxC/view?usp=drive_link), [back](https://drive.google.com/file/d/1Oc45Uj5M5ySbKikOYqvl6jA0KX9sEB6O/view?usp=drive_link)|\\n|\\\"a sushi\\\"|[link](https://drive.google.com/file/d/15QWzVwVvJ-r7rFqUcVXGhnmNGRMdUClx/view?usp=drive_link)|[link](https://drive.google.com/file/d/1d7hxtTi8NFio4kAXeMJjIGyrIaOwmfmA/view?usp=drive_link)|[link](https://drive.google.com/file/d/1uZerJAnPSwV-bAw0e3JXwBpaPsL96XzK/view?usp=drive_link)|[link](https://drive.google.com/file/d/1VMeuBZoB-CuDDc0ekSCb5ezrXQLeVNgr/view?usp=drive_link)|[link](https://drive.google.com/file/d/1gSCl1HYbhxIK4ovEEs0OJVF1CgqIgHLg/view?usp=drive_link)|\"}",
"{\"comment\": \"We deeply appreciate the time and effort each reviewer has devoted to providing thoughtful and constructive feedback. We have carefully addressed the reviewers\\u2019 comments and submitted a revised version of the paper that incorporates their valuable suggestions.\", \"to_summarize_the_changes_made\": \"(1) Improved the flow of the introductory section on page 1, line 46.\\n\\n(2) Replaced parts of Figure 1, Figure 6, and Figure 7 to address quality degradation caused by scaling.\\n\\n(3) Added quantitative results for the performance preservation loss ablation study in Table 2.\\n\\n(4) Removed statements claiming our results are better aligned with prompts in the qualitative comparisons section.\\n\\n(5) Expanded the contribution section for greater clarity.\\n\\n(6) Added citations for the methods compared in Figure 1 to the caption and clarified the meaning of the equation in Figure 3.\\n\\n(7) Included additional qualitative comparisons for Objaverse and RenderPeople datasets in the appendix.\\n\\n(8) Added qualitative results for non-diffuse objects in the appendix.\\n\\n(9) Provided ablation studies on various objects in the appendix.\\n\\n(10) Added a detailed discussion of limitations in the appendix.\\n\\n(11) Included zoomed-in comparisons between TexTailor and Text2Tex in the appendix.\\n\\n(12) Corrected minor typographical errors throughout the manuscript.\\n\\n(13) included GIF results in the supplementary material \\n\\nWe sincerely thank the reviewers for their invaluable insights and guidance, which have significantly improved the quality of our work.\"}",
"{\"comment\": \"#### 2.RenderPeople Dataset\\n|Text Prompt|Latent-Paint|Text2Tex|TEXTure|Paint-it|Ours|\\n|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n|\\\"A business woman wearing a white blouse with a ribbon detail, light beige pants, nude-tone heels, and neatly tied blonde hair\\\"|[claudia](https://drive.google.com/file/d/1cugJNiHxkIxzZmfapOkUSUt1b5nmX-I4/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1tuUSCLRtZs8meKal50_kkfX0XkTDIlwU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1HkO7HNKWkFh4u_xZfxRTgiyQV35ENyvS/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1cYYHgVoRn5YN-BQGbSXxtVa3ONWLBpIU/view?usp=drive_link)|[claudia](https://drive.google.com/file/d/1jpUILlxb39eSsAhZnk0kMBt2dTQ_iswl/view?usp=drive_link)|\\n|\\\"A man, wearing a white dress shirt, a black vest, black formal trousers, a black tie, a black belt, and dark formal shoes, with short neatly styled hair\\\"|[eric](https://drive.google.com/file/d/1zu8JFHMlV2oHCu84je9z-hJn1_YUACl0/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1ejcq1bB6djVUmmCqAxbqrDA2q_l_mOsK/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1kQ4JW-Nu14p8qiLvR4RwvB90Iu_elRoI/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1FItS8NhnffnzjoNPPU1or_t6ScWUveO2/view?usp=drive_link)|[eric](https://drive.google.com/file/d/1YibXGY7AicDrCaalaJUChKGRE5nQ6T7c/view?usp=drive_link)|\\n|\\\"A man, wearing a gray short-sleeve T-shirt, blue jeans, white sneakers, and short, dark brown hair styled neatly\\\"|[manuel](https://drive.google.com/file/d/1yRmwmkkZ8c2VmC3fr2B92R2ztWT6BC0a/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1jasYCugvfEaWtXMusmCFExluKTggHtKX/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1kXzUleXzPhJkehtuoNUPhcj9YJzqYSRm/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1ZYkWlhq2cxa-jyxaPeHbdvqTmjQE9-RP/view?usp=sharing)|[manuel](https://drive.google.com/file/d/1epeP2Ms2HC5AkW7uumw9f08H6dk-FVCc/view?usp=sharing)|\\n|\\\"A woman with medium-dark skin tone, wearing a black blazer, a black top, gray pants with a gray tied belt, black heels, and having neatly styled dark hair\\\"|[carla](https://drive.google.com/file/d/1SN1Hft8lAbLU4HHTxB7k6e_WxoG-h-r-/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1jJCNNpgXsB8xJKu6NnGkXicQ3ZDUOCRf/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1OZ-uWrrX06M19CDgpukyCMkM0A5A4l78/view?usp=drive_link)|[carla](https://drive.google.com/file/d/1vc3EaAvNO18t8bUo8l_-KC5hhGTOzDTD/view?usp=sharing)|[carla](https://drive.google.com/file/d/1TlZ2wcokdUOz9Q-KIsSIShCiheelB1ox/view?usp=sharing)|\\n\\n### Reference\\n[1] Lugmayr, Andreas, et al. \\\"Repaint: Inpainting using denoising diffusion probabilistic models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[2] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. \\\"Denoising diffusion probabilistic models.\\\" Advances in neural information processing systems 33 (2020): 6840-6851.\\n\\n[3] Song, Jiaming, Chenlin Meng, and Stefano Ermon. \\\"Denoising diffusion implicit models.\\\" arXiv preprint arXiv:2010.02502 (2020).\\n\\n[4] xatlas, https://github.com/jpcy/xatlas\\n\\n[5] Laine, Samuli, et al. \\\"Modular primitives for high-performance differentiable rendering.\\\" ACM Transactions on Graphics (ToG) 39.6 (2020): 1-14.\\n\\n[6] RenderPeople, https://renderpeople.com/free-3d-people/, 2023\"}"
]
} |
1NkrxqY4jK | Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons | [
"Jianhui Chen",
"Xiaozhi Wang",
"Zijun Yao",
"Yushi Bai",
"Lei Hou",
"Juanzi Li"
] | Large language models (LLMs) excel in various capabilities but pose safety risks such as generating harmful content and misinformation, even after safety alignment. In this paper, we explore the inner mechanisms of safety alignment through the lens of mechanistic interpretability, focusing on identifying and analyzing *safety neurons* within LLMs that are responsible for safety behaviors. We propose *inference-time activation contrasting* to locate these neurons and *dynamic activation patching* to evaluate their causal effects on model safety. Experiments on multiple prevalent LLMs demonstrate that we can consistently identify about $5$% safety neurons, and by only patching their activations we can restore over $90$% of the safety performance across various red-teaming benchmarks without influencing general ability. The finding of safety neurons also helps explain the ''alignment tax'' phenomenon by revealing that the key neurons for model safety and helpfulness significantly overlap, yet they require different activation patterns for the same neurons. Furthermore, we demonstrate an application of our findings in safeguarding LLMs by detecting unsafe outputs before generation. | [
"Large Language Models",
"Mechanistic Interpretability",
"Safety Alignment",
"Neuron"
] | Reject | https://openreview.net/pdf?id=1NkrxqY4jK | https://openreview.net/forum?id=1NkrxqY4jK | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zXvvdMTr9O",
"x1qZEX5x9x",
"wM9H4e9UBw",
"wJfs7C9uVS",
"uzWrKtBGnL",
"oWJUDz6O0v",
"oDEmL6XyVY",
"o8Dss0jSVx",
"lFjwY9fbIx",
"jNBQbhcMP9",
"iFZYepTK8B",
"i2zyPEPx6v",
"fhjZMocfzo",
"bZUhaxYlhh",
"PRigFiJoo6",
"GB9BJbsruD",
"Dnv5ok1xOy"
],
"note_type": [
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1734530308984,
1730702392690,
1732531209808,
1732980898925,
1733210883339,
1732691353635,
1733166052064,
1730561570302,
1733165680662,
1732548791089,
1730588739952,
1732382506340,
1732381706796,
1732382866258,
1732383265933,
1737523935420,
1732382548697
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8830/Area_Chair_GErK"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_9rqA"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_HaLN"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_9rqA"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_9rqA"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_xkYp"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_9rqA"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8830/Reviewer_HaLN"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8830/Authors"
]
],
"structured_content_str": [
"{\"metareview\": \"The recommendation is based on the reviewers' comments, the area chair's evaluation, and the author-reviewer discussion.\\n\\nWhile the reviewers see some merits in using a mechanistic interpretability approach to study safety neurons in LLMs, this submission should not be accepted in its current form due to several fundamental issues, as pointed out by the reviewers, including\\n\\n- Distinction and novelty in comparison to existing works, especially \\\"Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, & Peter Henderson. (2024). Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications.\\\"\\n- Soundness of the methodology, especially the comments made by Reviewer 9rqA\\n\\nDuring the final discussion phase, reviewers suggest to reject this submission and no reviewer is willing to champion this paper in its current form. I also believe the presentation and position of the paper can be improved and demand another round of full reviews. I hope the reviewers\\u2019 comments can help the authors prepare a better version of this submission.\", \"additional_comments_on_reviewer_discussion\": \"This submission should not be accepted in its current form due to several fundamental issues, as pointed out by the reviewers, including\\n\\n- Distinction and novelty in comparison to existing works, especially \\\"Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, & Peter Henderson. (2024). Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications.\\\"\\n- Soundness of the methodology, especially the comments made by Reviewer 9rqA\\n\\nDuring the final discussion phase, reviewers suggest to reject this submission and no reviewer is willing to champion this paper in its current form. I also believe the presentation and position of the paper can be improved and demand another round of full reviews.\"}",
"{\"summary\": \"This paper introduces a novel methodology for identifying specific MLP neurons that contribute to safety alignment in large language models. The authors present two complementary techniques: inference-time activation contrasting, which identifies neurons by comparing their activation patterns between pre- and post-safety-finetuned model checkpoints; and dynamic activation patching, which employs causal interventions to quantify the extent to which the identified neurons are responsible for the model's safety behaviors.\\n\\nThe authors show that inference-time activation contrasting can robustly identify neurons that are causally responsible for safety behavior (as measured by dynamic activation patching), on a wide range of benchmarks.\\n\\nThrough extensive experimentation, the authors demonstrate several key findings. When safety neurons are patched into instruction-trained models that were finetuned for helpfulness, it increases safety but reduces helpfulness. The reverse effect is also observed, suggesting that safety and helpfulness behaviors rely on similar neural mechanisms - providing mechanistic evidence for the alignment tax hypothesis. Additionally, the identified safety neurons can be used for harmful prompt classification to prevent unsafe model outputs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors tested their method on a variety of model families (LLaMa2, Mistral, and Gemma), and used a variety of different datasets and cost models to evaluate safety. This helps increase confidence that the neurons are actually responsible for general safety behavior, and not just patterns present in a particular dataset/grading scheme.\", \"The authors show that the projections of their safety neurons onto the unembedding of the model, result in different tokens than toxicity neurons identified in previous work [1]. This distinction highlights that more complex instruction-tuned models have more nuanced mechanisms for dealing with safety than simply downweighting neurons that respond with toxic content.\", \"[1] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, & Rada Mihalcea. (2024). A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.\"], \"weaknesses\": \"- The primary contribution of this work lacks sufficient novelty in the context of existing research. Prior work has already demonstrated successful localization of safety-relevant components in language models across multiple architectural levels, including neurons [1], parameters [2], residual activations [3], attention heads [4] [5], and layers [6]. While the authors occasionally reference some of these works throughout the paper, they fail to provide a comprehensive discussion of this existing research in either the related work section or the discussion.\\n\\n\\n- The authors fail to adequately justify their focus on MLP neurons as the optimal level of abstraction for localizing safety behavior in language models. While they concentrate exclusively on neurons, prior work has demonstrated that safety behaviors emerge across multiple architectural components, particularly in attention heads and residual stream activations. The decision to analyze only neurons, while excluding these other important components, requires stronger theoretical or empirical justification. 
This limitation is particularly notable given that existing research has specifically identified attention heads as crucial contributors to refusal behavior [4].\\n\\n- The paper\\u2019s main contribution beyond identifying safety neurons is showing that helpfulness and safety training utilize similar mechanisms, which accounts for the \\u201calignment tax\\u201d seen during safety training. However, the evidence provided in favor of this hypothesis is limited. The evidence can also be explained by dynamic activation patching not being a very good way of transferring specific mechanisms between different checkpoints. The authors should also look at models finetuned on both helpful and harmful data at the same time (HHH trained model), and test whether safety and helpful neurons still conflict.\\n\\n- The classification results in Section 6 are very misleading. The authors suggest that safety neurons show promise in assisting with harmfulness classification. However, the results in Appendix E suggest that safety neurons aren\\u2019t that much more useful for classifying harmfulness compared to random neurons (with random neurons being better when using 1500 neurons). This suggests that the method does not actually localize safety neurons, or that localization is not very useful for probing for harmfulness. Also, if the authors are going to claim that safety neurons are useful for building defenses that improve safety, they should compare it against similar setups such as in [3].\\n\\n[1] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, & Rada Mihalcea. (2024). A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. \\n\\n[2] Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, & Peter Henderson. (2024). Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications.\\n\\n[3] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, & Dan Hendrycks. (2023). Representation Engineering: A Top-Down Approach to AI Transparency.\\n\\n[4] Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, & Neel Nanda. (2024). Refusal in Language Models Is Mediated by a Single Direction.\\n\\n[5] Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, & Yongbin Li. (2024). On the Role of Attention Heads in Large Language Model Safety.\\n\\n[6] Shen Li, Liuyi Yao, Lan Zhang, & Yaliang Li. (2024). Safety Layers in Aligned Large Language Models: The Key to LLM Security.\", \"questions\": [\"What motivated your decision to focus exclusively on MLP neurons, given that prior work has shown attention heads are crucial for refusal and safety behavior?\", \"Have you considered validating your hypothesis about helpfulness and safety mechanism overlap using models simultaneously trained on both helpful and harmful data?\", \"Are the probing results, primarily a negative result? If so, the section should be edited to clarify that.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"About Answer to W1 & Q1:\\nThank you for the explanation of the methods and comparisons with [6] and [7]. However, there are areas where further clarification would greatly enhance understanding:\\n\\ufeff\\n* Clarification of \\\"Properties\\\":\\nAt the beginning of Section 4, you refer to properties like sparsity, causal effect, transferability, and stability during training. Could you elaborate on what these \\\"properties\\\" specifically mean in the context of your work? How are they quantitatively or qualitatively defined, and how are they measured?\\n* Controlling Properties:\\nWhen you state that [7] \\\"cannot control the specific properties of the neurons identified,\\\" could you clarify what it means to \\\"control\\\" a property, particularly for transferability? How does your approach enable control (or lack thereof) of such properties, and why is this ability to control properties important for identifying safety neurons?\\n* Comparison with [6]:\\nIn your comparison with [6], you mention that their method starts from desired neuron properties and identifies neurons mechanistically, but suffers from reproducibility issues due to predefined assumptions. You also state that for safety neurons, \\\"we lack prior knowledge about the properties they should exhibit, making such an approach unsuitable.\\\" This could use more elaboration:\\nCould you explain why the lack of prior knowledge about safety neurons' properties makes the approach of [6] unsuitable?\\nDoes your method completely avoid making assumptions about neuron properties, or do you rely on certain implicit assumptions? If so, how do these differ from those of [6]?\"}",
"{\"title\": \"Response by the authors to Reviewer 9rqA\", \"comment\": \"Thanks for the response and experiment suggestions. We provide further explanations about our novelty and contribution, and we also add new experiments as suggested.\\n\\n## Regarding W1&W2\\n\\nWe agree that only providing a new localization at a different granularity is limited and it is important for an interpretability work to provide new insights. This is why we provide the **interpretation of alignment tax with safety neurons**. Also, to demonstrate the potential utility, we included the safeguard experiments. We understand your concerns about these two parts (W3 and W4) and have added more experiments as suggested.\\n\\nTo summarize, our contributions are two-fold:\\n\\n1. **New techniques for Localizing Model Components:** Our framework (inference-time activation contrasting and dynamic activation patching) identifies model components (not limited to neurons) that have a causal effect on specific behaviors (not limited to safety), even in the absence of ground-truth labels. This expands the scope for investigating various behaviors at different granularities.\\n\\n2. **New insights from the safety neuron interpretation:** The localization of safety neurons in this paper enables new insights about LLMs\\u2019 inner workings. For example, we are the first, to our best knowledge, to propose a mechanistic explanation for the alignment tax phenomenon, and we believe more insights about model safety can be revealed by more careful studies on the properties of safety neurons.\\n\\n## Regarding W3\\n\\nThank you for raising this important point and for the valuable suggestion. We acknowledge that our paper suggests point 2 but does not directly prove point 1. To address this gap, we conducted an additional experiment to verify point 1:\\n\\n1. We used DPO to train two models based on the same SFT model: one trained on HH-helpful (denoted as *Helpful DPO*), and the other one trained on both HH-harmless and HH-helpful (denoted as *HH DPO*).\\n\\n2. We patched 5% of neuron activations from *HHDPO* into *Helpful DPO*, while the neurons are identified from *Helpful DPO* for model helpfulness under the same pipeline of identifying safety neurons in the paper.\\n\\n| | **BT** | **RT** | **HB** | **JL** |\\n|------------------------|----------|----------|----------|----------|\\n| Helpful DPO | 3.42 | 0.65 | 6.68 | 6.66 |\\n| Helpful DPO (patched) | -11.77 | -11.09 | -5.57 | -8.28 |\\n| HH DPO | -11.81 | -12.42 | -10.41 | -11.76 |\\n\\n\\n\\nThe results indicate that **the neurons identified as helpful neurons are also crucial for improving model safety during HH training**, which is exactly the case for your hypothesis. We will add these important results to the paper in the next permitted revision. If you have any concerns or further questions, we are more than happy to continue the discussion.\\n\\n## Regarding W4\\n\\nThank you for your suggestion. As the submission deadline has passed, we are unable to update the PDF with figures to showcase our results. Below are the details of our experimental setup and corresponding results:\\n\\nWe introduced a baseline using the residual stream and reported best-performing and average results across all layers. 
A partial summary of the results is presented in the table below (\\u00b1 is the standard error across random experiments):\\n\\n\\n| | **150** | **1500** | **3000** |\\n|------------------------|----------------------|----------------------|----------------------|\\n| Safety Neurons | 71.08 | 76.24 | 76.89 |\\n| RN-Same Distribution | 68.30\\u00b11.02 | 74.80\\u00b11.31 | 76.35\\u00b10.67 |\\n| RN-Last | 67.66\\u00b10.53 | 74.21\\u00b11.19 | 74.91\\u00b10.29 |\\n| RN-All | 67.05\\u00b11.15 | 72.38\\u00b10.54 | 74.34\\u00b10.50 |\\n| Residual | 77.80 (layer 15) | 71.75 (average) | |\\n\\n\\n\\nThe effect of safety neurons is on par with the best performance of the residual stream. Considering that the neuron-level interpretation has the unique advantage of providing mechanistic interpretations (showcased in the interpreting alignment tax part) over the representation interpretations like using the residual stream, we believe this result is satisfactory.\"}",
"{\"comment\": \"We agree that patching over the residual activations is possible, but this paper is not about discussing which one of the neuron and representation interpretations is better. This paper has always been about interpreting model safety from the very beginning, and we are just saying that in this topic, our neuron-level interpretation demonstrates an advantage in understanding the mechanism (of alignment tax) over the representation-engineering related works. To avoid misunderstanding again, we are not saying that representation engineering has no way to gain a better mechanistic interpretation of model safety, but this hasn't been achieved (or demonstrated by a clear case like the alignment tax interpretation in our paper), and thus we believe our findings have unique values for now. We understand that one can have the favor of a technical direction over another, but we do not think the possibility of another direction should influence the value of actual findings.\\n\\nAlso, thanks for acknowledging our new experiments. Although in different positions, we respect your opinion.\"}",
"{\"comment\": \"### Regarding W1 & W2\\n I acknowledge the engineering challenges involved in interpreting neurons compared to more coarse-grained units like layers and attention heads. The authors correctly note that previous works have examined safety behavior at both higher granularity (parameters) and lower granularity (attention heads/layers). However, my concern about novelty remains: the central question is not whether one can study safety behaviors at different granularities, but whether doing so provides meaningful new insights.\\n\\nEven if the proposed probing method were significantly better on safety neurons versus random neurons, I'd still be unsure what the actionable insight would be. Researchers building latent space defenses (in the RepE camp) aren't probing at random neurons right now, so it feels like an artificial comparison. A more convincing demonstration would have been to show that neuron-level probing outperforms probing at the residual stream (representation engineering), which would have provided stronger validation for the need for additional mechanistic interpretability approaches.\\n\\nI question whether papers that merely localize the same behaviors at different levels of granularity, without demonstrating novel insights or superior practical utility, represent sufficiently substantial contributions to warrant publication. \\n\\n### Regarding W3\\nI think there are two related but distinct claims here\\n\\n1. There is some sort of fundamental conflict between the neurons the model tends to use for helpful behavior, versus the neurons that the model tends to use for safety behavior.\\n2. Finetuning a base model on just helpfulness allows it to repurpose some of its safety neurons. Similarly, fine-tuning a base model on just safety allows it to repurpose some of its helpful neurons.\\n\\nThe results in the paper suggest 2, but do not prove 1. I think proving 1 will be a challenge, that will require a more careful methodology. For example, fine-tuning a model on both helpful and harmless behavior, and then patching its safety mechanism over to a model just trained on helpfulness, and measuring how safe it is. I think there is a lot of subtlety that could be discussed here.\\n\\n### Regarding W4\\nI think the authors should move Figure 10b, into Section 6. Without any additional context, it is hard to understand how good an accuracy of 76.2% is. I would have appreciated additional baselines here, such as probing directly on the residual stream. I would also appreciate error bars for the random neuron lines, considering that the margins are so small.\\n\\n\\n\\nWhile I appreciate the authors' effort to strengthen the paper, particularly with the addition of a more comprehensive related works section, my core concerns remain. Therefore, I maintain my score of 5.\"}",
"{\"comment\": \"Overall, I believe that the addition of the new experiments, particularly the HH-DPO patching and the residual stream probing, are valuable contributions to the paper. However, I still believe that my initial score is appropriate.\"}",
"{\"summary\": \"Focusing on the safety mechanism of LLMs, this paper proposes (1) inference-time activation contrasting, to locate safety neurons, and (2) dynamic activation patching, to evaluate their causal effects on model safety. The key observation is that only a few (5%) neurons contribute to the safety of the model. This paper also proposes applications of the observations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Understanding the safety mechanism of LLMs is a crucial research problem.\\n1. This paper focuses on various aspects of the proposed interpretability methods, including empirical observation on neurons, transferability, and potential application, making the contribution a comprehensive framework.\", \"weaknesses\": \"1. The presentation of this paper can be substantially improved. Many terms are not well explained in the paper, e.g. cost scores in Table 3, $(IA)^3$ in Section 4.1\\n1. The observation that a few safety neurons contribute to the safety of LLMs has already been spotted in some related work, but they are not cited and discussed.\\n - On Prompt-Driven Safeguarding for Large Language Models. ICML 2024\\n - Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications. ICML 2024\\n3. It seems that the 3 LLMs used are already aligned for safety (at least, to a certain degree) before they are released. What is the alignment in 4.1 here?\\n4. In my opinion, it would be necessary to include some advanced jailbreaking attacks for evaluation (both for the main observation and the application), since current LLMs can easily refuse to answer vanilla harmful questions.\\n5. Though evaluated 3 models, I still think the model scope is quite limited, e.g. all 3 models are in 7b size, but can the conclusion generalize to larger models?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I disagree with this claim\\n> Considering that the neuron-level interpretation has the unique advantage of providing mechanistic interpretations (showcased in the interpreting alignment tax part) over the representation interpretations like using the residual stream\\n\\nIt is also possible to perform patching over the residual activations, as identified by these two works [1] [2]. \\n\\n[1] Curt Tigges, Oskar John Hollinsworth, Atticus Geiger, & Neel Nanda. (2023). Linear Representations of Sentiment in Large Language Models.\\n\\n[2] Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, & Noah D. Goodman. (2024). Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations.\"}",
"{\"comment\": \"Thank you for your acknowledgment and for giving us the chance to explain. We are so sorry for the potential confusion caused by the original response, which is inaccurate and could be explained in a dangerous way like we are criticizing the related works. Here are our clarifications regarding your questions. To avoid spreading confusion, we have revised the last comment (the old version is visible in the revision history if needed) and the PDF.\\n\\n## Question 1: Clarification of \\\"Properties\\\"\\n\\n- Causal Effect: In this paper, the causal effect is measured by the formula in Equation 4.\\n- Sparsity: The sparsity in this work is measured by the proportion of neurons required to achieve a decent level of causal effect (e.g., 0.9) on model safety. In our main results, safety neurons constitute approximately 5% of all neurons.\\n- Transferability: Transferability refers to how well the safety neurons identified in one dataset also work on others. In our experiments, we identified safety neurons in Beavertails, and these neurons also work well on other red-teaming benchmarks, such as in Table 2.\\n- Stability on Training: This evaluates the consistency of safety neurons identified across models trained with different random seeds. We measured this using both neuron overlap and Spearman rank correlation, achieving values above 0.95 across the three model families. Additionally, the narrow error bars in Figure 2 also exhibit stability.\\n\\n## Question 2: Controlling Properties\\n\\nWe apologize for the imprecise expression. The properties here do not refer to the properties of safety neurons in Section 4. Here we use \\u201cproperties\\u201d to refer to the phenomena or properties of LLMs that we want to interpret by identifying neurons. For instance, the property corresponding to safety neurons in this context is model safety. In [7], the authors identified universal neurons that are responsible for \\u201cproperties\\u201d like alphabet, position, suppression, etc. However, the method described in [7] does not allow specifying the \\u201cproperties\\u201d to be interpreted before identifying neurons, and thus cannot be directly applied to the goal of this work, i.e., interpreting model safety.\\n \\n## Question 3: Comparison with [6]\\n\\nFirst, we need to clarify that the \\u201creproducibility issue\\u201d definitely does not mean there are difficulties in reproducing the original results from our side. It is a bad expression for \\u201cthe method in [6] cannot be directly reused in our work\\u201d, and we deeply apologize for the horrible implications. In [6], the authors identified entropy neurons by searching for neurons with a high weight norm and minimal impact on the logits. The underlying assumption is that these neurons act as a near-constant addition to all logits before the softmax, resulting in a minimal effect on output probabilities while increasing their entropy.\\n\\nFor model safety, however, we lack such a clear mechanistic intuition about how safety-related neurons should work. Therefore, our only assumption is that safety neurons should work in different ways between safety-aligned and unaligned models, and we design the inference-time activation contrasting to identify them.\"}",
"{\"summary\": \"## Summary\\n\\nThe authors propose methods to identify \\\"safety neurons\\\" within large language models (LLMs) that are responsible for safety behaviors. They introduce \\\"inference-time activation contrasting\\\" to pinpoint neurons active in aligned models but inactive in unaligned ones, and \\\"dynamic activation patching\\\" to assess the causal impact of these neurons on safety. These findings suggest a pathway toward more controlled and robust alignment of LLMs with human values and safety requirements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"## Strengths\", \"The topic of LLM safety is highly relevant and timely.\", \"The paper makes solid contributions by:\", \"Identifying safety neurons in three open-source LLMs.\", \"Proposing an effective safeguard application.\"], \"weaknesses\": [\"## Weaknesses\", \"Novelty Concerns: The novelty of the proposed approach is unclear. Previous studies have investigated critical neurons within LLMs. The authors should clarify how their methods differ from or improve upon existing approaches.\", \"Limited Discussion: The paper lacks a sufficient discussion on how the proposed methods relate to existing representation engineering techniques (https://arxiv.org/pdf/2310.01405). A deeper comparison would help contextualize their contributions.\"], \"questions\": [\"## Questions:\", \"How does the proposed approach for identifying \\\"safety neurons\\\" differ from prior methods that target other types of critical neurons in LLMs?\", \"Can the \\\"dynamic activation patching\\\" method be generalized to other alignment applications, such as aligning models with values beyond safety (e.g., fairness)?\", \"Do you find any mechanistic insight? For example, did you observe specific patterns among the \\\"safety neurons\\\" related to particular types of safety risks, such as misinformation or toxicity?\", \"For safeguard applications, what is the overhead of your proposed approach?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response by the authors to Reviewer 9rqA\", \"comment\": \"Thank you for your constructive and valuable comments. Here is our response to the weaknesses and questions.\\n\\n\\n## W1 Lack of comprehensive discussion\\n\\nPlease refer to our general response.\\n\\n## W2 & Q1 Why focus on neurons?\\n\\n1. From the perspective of research objectives: Our goal is to develop a mechanistic understanding of LLM safety. The residual stream studied in RepE represents the combined effects of attention heads and MLPs. Understanding **how these effects are formed requires digging deeper into the MI perspective**. Since MLP neurons account for approximately two-thirds of the model\\u2019s parameters and serve as the basic functional units, we chose neurons as the focus of our research.\\n2. From a technical perspective: Compared to attention heads, identifying specific neurons presents a greater technical challenge due to their vastly larger quantity. For instance, in LLaMA2-7B, the number of neurons is about **340 times** that of attention heads, and neurons often function in combinations, making it difficult to pinpoint them using DFA methods like those in [1]. These challenges motivated us to prioritize studying neuron findings. Of course, we **do not claim that neurons alone provide a complete understanding of the safety mechanism**. Given the complexity of safety, it likely requires the joint participation of neurons and attention heads. Additionally, the methods proposed in this paper can also be applied to identify attention heads, facilitating further exploration of their interactions, which we consider an interesting direction for future work.\\n3. Regarding related work on attention heads: [2] was published **after our submission**, while [1] identifies attention heads correlated with refusal directions from the RepE perspective but does not verify their **causal effects on safety**. This focus differs from the scope of our study.\\n\\n## W3 & Q2 Evidence provided in favor of the alignment tax hypothesis is limited\\n\\nOur experiments (e.g. the curves in Figure 4(b)) rely on testing the causal effects of **different neurons** (e.g., safety, helpfulness, reasoning, etc.) on model safety and helpfulness, and **do not involve transferring other abilities**. The effectiveness of dynamic activation patching in transferring safety and helpfulness has been validated by previous experiments such as Figure 2 and Table 2. Therefore, we cannot see another alternative interpretation of our experimental results. We are willing to address your concerns but are unsure about the reasoning behind your interpretation or the specific experiments you propose. We would really appreciate it if you could elaborate more, like describing the suggested experiments in detail, and we are more than happy to verify it.\\n\\n\\n## W4 & Q3 The classification results in Section 6 are very misleading\\n\\nThank you for your valuable suggestions. We agree that the margin of the original results is not clear enough. To more comprehensively verify whether safety neurons encode more safety-related information compared to random neurons, we conducted additional experiments:\\n\\n1. For the datasets used in the experiments, we selected one dataset as the training set and merged the others as a single test set at a time, averaging the results across all rotations.\\n2. We **excluded safety neurons** from the randomly sampled set.\\n3. 
We added a group of random neurons sampled from all layers, as following the **layer distribution of safety neurons** may inherently carry safety-related information.\\n\\nThe updated results have been added to the appendix, and we revised the corresponding descriptions in the main paper to be more precise. Below is a brief summary of the results and our explanations:\\n\\n| | 150 neurons | 1500 neurons |\\n|--------------------|-------------|--------------|\\n| safety neuron | **71.1** | **76.2** |\\n| random neuron last layer | 67.7 | 74.2 |\\n| random neuron same distribution | 68.3 | 74.8 |\\n| random neuron all layers | 67.0 | 74.7 |\\n\\nFrom the results, we observe that safety neurons are indeed more effective than random neurons in predictions. Additionally, random neurons with the same layer distribution as safety neurons are more effective than those sampled from other layers, which indicates the layer distribution of safety neurons may also encode safety information. This may partially explain the results in Appendix E. We sincerely apologize for our oversight and thank you for pointing this out.\\n\\nLastly, we would like to note that the differences in prediction performance are not very significant, which may be due to the following reasons:\\n1. Safety neurons may not **directly encode information** about whether harmful content will be generated but instead exert their effects through subsequent components.\\n2. Random neurons may still **receive information from safety neurons**.\\n\\nWe plan to further investigate these aspects in future work.\"}",
"{\"title\": \"General Response\", \"comment\": \"We sincerely thank the reviewers for their thoughtful feedback and constructive suggestions. We have revised the paper based on the comments provided, with all changes highlighted in blue for clarity. We note that all the reviewers commented about our novelty and relationship to prior works. Below, we give a discussion on this:\\n\\nFirst of all, we believe that the safety mechanism of LLMs is an important topic that is far from being solved. Therefore, it is worthwhile to have multiple papers working on this topic. Existing interpretability research on LLM safety can be broadly categorized into two perspectives: **Representation Engineering (RepE)** and **Mechanistic Interpretability (MI)**. We acknowledge the importance of RepE-focused studies, as they often demonstrate strong practical effectiveness in steering model behavior. For instance, [3][4] are firmly grounded in the RepE perspective, and [1][6] also incorporate some perspectives of this approach. \\n\\nIn contrast, **our work adopts the MI perspective**, which seeks a bottom-up understanding of models\\u2019 inner workings. This perspective emphasizes the importance of localizing model functionality to the most fundamental operational units\\u2014a core principle of mechanistic interpretability. In the case of transformers, MLP neurons constitute approximately two-thirds of the model's parameters and serve as the foundational units for functionality. Therefore, we focus our study on neurons as the target of analysis to uncover safety mechanisms.\\n\\nFor articles categorized under the MI perspective, [1] has been discussed in our paper, where we point out that toxicity is an incomplete part of model safety concerned in our work, a view also acknowledged in Strength 2 of Review 9rqA and recent work [7]. [2] adopts a different definition of \\u201cneuron\\u201d, which describes individual parameters rather than complete functional units in this paper. Since features in transformers are usually represented as vectors, it is difficult to interpret how different parameters in a single vector play different mechanistic roles. [5] is a work published after our submission, and we could not include it in the paper. We believe that the functionalities of neurons and attention heads are not in conflict; instead, complex functions like safety are more likely to result from their collaboration. We plan to further explore their relationship in future work. [6] adopts a safety layer perspective, which we consider too coarse-grained compared to neurons and attention heads for providing a mechanistic understanding.\\n\\nThanks again for referring to the related works. We have added the discussions in the revision.\\n\\n[1] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, & Rada Mihalcea. (2024). A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.\\n\\n[2] Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, & Peter Henderson. (2024). Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications.\\n\\n[3] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, & Dan Hendrycks. (2023). 
Representation Engineering: A Top-Down Approach to AI Transparency.\\n\\n[4] Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, & Neel Nanda. (2024). Refusal in Language Models Is Mediated by a Single Direction.\\n\\n[5] Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, & Yongbin Li. (2024). On the Role of Attention Heads in Large Language Model Safety.\\n\\n[6] Shen Li, Liuyi Yao, Lan Zhang, & Yaliang Li. (2024). Safety Layers in Aligned Large Language Models: The Key to LLM Security.\\n\\n[7] Yushi Yang, Filip Sondej, Harry Mayne, & Adam Mahdi. (2024). Ablation is Not Enough to Emulate DPO: How Neuron Dynamics Drive Toxicity Reduction.\"}",
"{\"title\": \"Response by the authors to Reviewer HaLN\", \"comment\": \"Thank you for your constructive and valuable comments. Here is our response to the weaknesses and questions.\\n\\n## W1 & Q1 How does our method differ from prior ones that target other types of critical neurons?\\n\\nPlease refer to Section 7 of the revised paper (Lines 499-516) for discussions on related neuron-based works.\\n\\nAlso please refer to our general response for discussions on related works on interpreting safety. Thanks for the question and we will add the discussions if more space is permitted.\\n\\n## W2 Limited Discussion\\nPlease refer to our general response.\\n\\n## Q2 Can the \\\"dynamic activation patching\\\" method be generalized\\nDynamic activation patching is task-agnostic. At the very least, Figure 4(b) in our paper demonstrates its effectiveness in altering the model\\u2019s helpfulness. As for whether it can be extended to other aspects of value alignment, we believe it is possible and this is an interesting direction for future work.\\n\\n## Q3 Mechanistic insight from safety neuron\\nOur current mechanistic insight suggests that safety and helpfulness may share the same set of neurons but exhibit different activation patterns on these neurons, which could potentially explain the alignment tax phenomenon. We think your suggestion about exploring patterns among the \\\"safety neurons\\\" related to specific types of safety risks to be a very interesting perspective. We plan to investigate this further in future work and sincerely thank you for proposing this valuable direction.\\n\\n## Q4 Overhead of safeguard applications\\nThe overhead of our safeguard mechanism primarily comes from a logistic regression classifier. When using activations from only 1,500 neurons, this requires merely computing the **inner product of 1,500-dimensional vectors**, which is negligible compared to the billions of parameters in an LLM. In fact, if certain outputs can be rejected early, the process could even accelerate generation. Based on our measurements, the classification step takes less than 0.01 seconds, accounting for **less than 1/2500 of the total inference time**.\\n\\n[1] Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, & Juanzi Li. (2022). Finding Skill Neurons in Pre-Trained Transformer-Based Language Models.\\n\\n[2] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, & Furu Wei. (2022). Knowledge Neurons in Pretrained Transformers.\\n\\n[3] Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, & Dimitris Bertsimas. (2023). Finding Neurons in a Haystack: Case Studies with Sparse Probing.\\n\\n[4] Zeping Yu, & Sophia Ananiadou. (2024). Neuron-Level Knowledge Attribution in Large Language Models.\\n\\n[5] Ameen Ali, Lior Wolf, & Ivan Titov. (2024). Mitigating Copy Bias in In-Context Learning through Neuron Pruning.\\n\\n[6] Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, & Neel Nanda. (2024). Confidence Regulation Neurons in Language Models.\\n\\n[7] Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, & Dimitris Bertsimas. (2024). Universal Neurons in GPT2 Language Models.\"}",
"{\"title\": \"Response by the authors to Reviewer xkYp\", \"comment\": \"Thank you for your constructive and valuable comments. Here is our response to the weaknesses and questions.\\n\\n## W1 The presentation of this paper can be substantially improved\\n\\nThank you for your suggestion, and we apologize for any unclear statements. We have revised the corresponding sections in the newly submitted version to improve clarity.\\n\\n## W2 Safety neurons in related work\\n\\nPlease refer to our general response. We also added a discussion on this aspect in the related work section of our newly uploaded paper.\\n\\n## W3 What is the alignment in 4.1\\uff1f\\n\\nWe agree that the released **chat- or instruct- versions** of current LLMs are generally safe, but to conduct experiments in a more controllable setting, we chose to begin alignment from the **pre-trained base models**. As mentioned at the **end of Section 3**, we refer to \\\"the pre-trained LLMs before SFT (denoted as Base).\\\" Additionally, the names used in Section 4.1, such as Mistral-7b-v0.1, refer to the **official model names**, not the abbreviated versions of the aligned chat models. Sorry for the potential misunderstandings. The results in Table 2 show that the base models used in this paper are not safe enough and our alignment improves safety.\\n\\n## W4 It is necessary to include some advanced jailbreaking attacks for evaluation\\n\\nWe believe our practice is appropriate since using red teaming benchmarks to evaluate model safety is a very common setting used by existing works, such as the two papers mentioned in your review. Moreover, similar to the misunderstanding in Weakness #3, we agree that current LLMs can generally refuse harmful questions, but this is the case for chat- and instruct- models. For the based models used in our experiments, they are clearly not safe enough in the adopted benchmarks (shown in Table 2).\\n\\n## W5 Can the conclusion generalize to larger models?\\n\\nIn general, we believe it is a common practice to use this model size in interpretability research [1-5], but we agree this is a valid concern and are happy to add more experiments. Due to the more computation and limited resources, more experiments are still running. We provide the results for Llama2-13B below (the format is similar to Table 2), which show similar trends to the original experiments in the paper.\\n\\n| Llama2-13B | BT | RT | GSM | BBH | MMLU | TQA |\\n|------------|-------|-------|-------|--------|-------|-------|\\n| Base | -4.5 | -4.0 | 0.22 | 0.151 | 0.507 | 0.268 |\\n| Base* | -8.7 | -8.4 | 0.2 | 0.142 | 0.483 | 0.272 |\\n| SFT | -7.5 | -5.8 | 0.165 | 0.133 | 0.525 | 0.268 |\\n| SFT* | -11.2 | -10.3 | 0.165 | 0.132 | 0.528 | 0.278 |\\n| DPO | -12.2 | -11.2 | 0.185 | 0.122 | 0.520 | 0.288 |\\n\\n\\n\\n[1] Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, & Yongbin Li. (2024). On the Role of Attention Heads in Large Language Model Safety.\\n\\n[2] Shen Li, Liuyi Yao, Lan Zhang, & Yaliang Li. (2024). Safety Layers in Aligned Large Language Models: The Key to LLM Security.\\n\\n[3] Zeping Yu, & Sophia Ananiadou. (2024). Neuron-Level Knowledge Attribution in Large Language Models.\\n\\n[4] Ameen Ali, Lior Wolf, & Ivan Titov. (2024). Mitigating Copy Bias in In-Context Learning through Neuron Pruning.\\n\\n[5] Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, & Neel Nanda. (2024). Confidence Regulation Neurons in Language Models.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"References\", \"comment\": \"[1] Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, & Neel Nanda. (2024). Refusal in Language Models Is Mediated by a Single Direction.\\n\\n[2] Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, & Yongbin Li. (2024). On the Role of Attention Heads in Large Language Model Safety.\"}"
]
} |
1Njl73JKjB | Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control | [
"Aleksandar Makelov",
"Georg Lange",
"Neel Nanda"
] | Disentangling model activations into human-interpretable features is a central
problem in interpretability. Sparse autoencoders (SAEs) have recently attracted
much attention as a scalable unsupervised approach to this problem. However, our
imprecise understanding of ground-truth features in realistic scenarios makes it
difficult to measure the success of SAEs. To address this challenge, we propose
to evaluate SAEs on specific tasks by comparing them to supervised
feature dictionaries computed with knowledge of the concepts relevant to the
task.
Specifically, we suggest that it is possible to (1) compute supervised sparse
feature dictionaries that disentangle model computations for a specific task;
(2) use them to evaluate and contextualize the degree of disentanglement and
control offered by SAE latents on this task. Importantly, we can do this in a
way that is agnostic to whether the SAEs have learned the exact ground-truth
features or a different but similarly useful representation.
As a case study, we apply this framework to the indirect object identification
(IOI) task using GPT-2 Small, with SAEs trained on either the IOI or OpenWebText
datasets. We find that SAEs capture interpretable features for the IOI task, and
that more recent SAE variants such as Gated SAEs and Top-K SAEs are competitive
with supervised features in terms of disentanglement and control over the model.
We also exhibit, through this setup and toy models, some qualitative phenomena
in SAE training illustrating feature splitting and the role of feature
magnitudes in solutions preferred by SAEs. | [
"mechanistic interpretability",
"sparse autoencoders",
"evaluations"
] | Accept (Poster) | https://openreview.net/pdf?id=1Njl73JKjB | https://openreview.net/forum?id=1Njl73JKjB | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"v5xmE7Qi9c",
"ttFGrS6I5I",
"ogsq7v1ter",
"mzdci0ca1h",
"l8F7J5lo5B",
"ip7P7wfWl8",
"cCZ8f2yWqG",
"aPRKWFFgew",
"Z2uwGuyZKP",
"U9JLX3uOVd",
"SELCgg2gOm",
"OJrzR9oVUb",
"Lj4sReiopC",
"Lge3wRaksU",
"LYPzWKSjnY",
"LE4BySMZus",
"GAnwjBHnC0",
"81EJ8tFGn2",
"6MtvvN79yJ",
"3Sm97xaUtt",
"3Fh1ANhB3y",
"0BDW41XtiK"
],
"note_type": [
"meta_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1734031341769,
1730599806002,
1737524218146,
1732530845472,
1732665614896,
1732412899539,
1730568897845,
1732413236443,
1732508648572,
1732412994941,
1732611910246,
1732652296376,
1732663579742,
1732563615837,
1732530964227,
1732413359795,
1732575048657,
1732413051585,
1732667572094,
1732413148114,
1730850184949,
1730315827176
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12831/Area_Chair_sqCg"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_S53N"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_TRbA"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_w15N"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_w15N"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_S53N"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_TRbA"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_jgpT"
],
[
"ICLR.cc/2025/Conference/Submission12831/Reviewer_TRbA"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper explores sparse autoencoders on language tasks that have a known ground truth to evaluate whether SAEs can provide similar interpretability and control as supervised feature dictionaries.\\n\\nSince most language tasks do not have a single ground truth, the authors originally used only one highly specialized experimental setting. Reviewers found this to be lacking, and also gave advice on how to improve the presentation, and better communicate the limitations of the work. After discussion, the authors heavily revised their presentation, included better discussion of limitations, and added additional experimental settings. All these changes were well-received by the reviewers, and the paper now stands as a clear and interesting contribution to studies on interpretability techniques for language models. This is a valuable research direction, and one in desperate need of more investigation by the community.\\n\\nAs such, I am recommending acceptance as a Spotlight.\", \"additional_comments_on_reviewer_discussion\": \"The main points of concern raised by reviewers were: lack of diverse experiments beyond the IOI dataset; poor presentation and writing; lacking discussion of limitations; questions about the experimental methodology; and other minor concerns.\\n\\nThrough the discussion (especially with Reviewer TRbA) the authors heavily edited the paper, including major rework to the presentation, inclusion of limitations, and new experiments including on new datasets. This was well received by reviewers, some of which greatly increased their scores. The major concerns have been resolved.\"}",
"{\"summary\": \"This paper studies the Sparse autoencoders to capture interpretable features for the IOItask and the expeirment results show that the proposed approach achieves the best performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is well-written and easy to follow.\", \"weaknesses\": \"There are several weaknesses:\\n\\n1. The motivation of this section should be enhanced.\\n\\n2. The English language should be improved.\\n\\n3. The main idea seems not very novel. This paper should provide a strong motivation.\\n\\n4. The experiment can be further improved by providing more results and analysis.\", \"questions\": \"Please see the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Before this phase of the discussion period ends, we wanted to ask the reviewer whether we have addressed your concerns with our work?\\n\\nWe believe we have addressed the concerns you raised regarding motivation, clarity of language, and including sufficient technical detail.\"}",
"{\"title\": \"Response to author\", \"comment\": \"Thanks to the authors for the quick reply. I think it addresses most of the points I raised. I appreciate the changes made. I especially like the change in the notation in both the sufficiency and necessity section. I admit that I missed the \\\"orange bar\\\" in pointer of 4.3.\\n\\nAlso thanks for shortening section 5.5. I think now the presentation is quite neat.\\n\\nFor the generalization to other tasks, I think I have not made my point clear. In the cases that we actually do not know the attributes of the tasks, how can we apply this method? The \\\"other tasks\\\" described in this papers are simple \\\"greater than\\\" and \\\"both\\\". But it is possible that, say, a QA dataset and we train an SAE using some inner layer embedding. How can we evaluate that? I am not looking for an answer here - but only as a discussion point.\", \"final_comments\": \"I think the authors did a great job in tackling various problems and running experiments in the field. Due to the page limit, it is not uncommon that to include all the methods and results in a paper. Unfortunately this reduces the readability and the focus of the paper. The reader will find the ideas of the paper all over the place. The latest edit did make the paper much more readable. \\n\\nThanks for the work.\"}",
"{\"title\": \"Top-level comment on motivation/novelty\", \"comment\": \"We thank all reviewers for their feedback. Since the unclear presentation of the paper's motivation and novelty were shared concerns for multiple reviewers, we would like to respond with a top-level comment addressing this concern.\\n\\nWe have (in the attached revision) **reworked the introduction and related work sections of the paper to emphasize the motivation of our work and what sets it apart from other works in this area** (changes marked in red). In brief:\\n- the research community has invested a lot of effort in the SAE paradigm; the main goal of this paradigm is to recover features the model uses to perform computations on various tasks.\\n- However, prior evaluations have focused on *proxy metrics* that we only hope correlate with recovery of these features\\n- The novelty of our work is, by contrast, in being the first to compare SAEs to carefully validated approximations of ground-truth features in a realistic linguistic task (as well as two other tasks we added during the discussion period to showcase the flexibility of our framework; see the top-level comment on generalizability of results). Moreover, these ground-truth features provide a skyline of a supervised dictionary trained to predict the labels of features discovered by prior work, which allows a fair comparison grounded in the limits of the representational power of SAEs, and allows us to contextualize the outputs of our metrics. \\n\\nWhile we have added two other tasks to demonstrate broader applicability (see other top comment), we believe our framework provides significant value even with a limited task selection. Practitioners often need to evaluate SAEs across hyperparameter sweeps [1] or compare new architectures [2,3,4], and the IOI task better represents realistic downstream applications than standard proxy metrics (which we discuss in the related work of the paper). Even though IOI is a special case where prior work identified relevant features, evaluating full-distribution SAEs on this task provides evidence for their effectiveness on tasks where ground truth is unknown. The same applies to evaluating SAE training methods via task-specific SAEs.\\n\\nWe thank the reviewers for pointing out the poor readability of the paper's motivation, and hope that our revisions and comments will mitigate this issue.\\n\\n[1] Lieberum, Tom, et al. \\\"Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2.\\\" *arXiv preprint arXiv:2408.05147* (2024).\\n\\n[2] Rajamanoharan, Senthooran, et al. \\\"Jumping ahead: Improving reconstruction fidelity with jumprelu sparse autoencoders.\\\" *arXiv preprint arXiv:2407.14435* (2024).\\n\\n[3] Gao, Leo, et al. \\\"Scaling and evaluating sparse autoencoders.\\\" *arXiv preprint arXiv:2406.04093* (2024).\\n\\n[4] Rajamanoharan, Senthooran, et al. \\\"Improving dictionary learning with gated sparse autoencoders.\\\" *arXiv preprint arXiv:2404.16014* (2024).\"}",
"{\"summary\": \"This paper focuses on evaluating sparse autoencoders (SAEs) for their ability to recover known ground-truth features learned by a model. To do so, the authors first train a supervised dictionary on the indirect object identification (IOI) task, for which model computation is already relatively known due to prior interpretability and circuit discovery work. Both IOI task-specific SAEs and full-distribution SAEs are trained and evaluated with respect to the supervised dictionaries to understand if SAEs allow for the same level of approximation, control, and interpretability as the supervised dictionaries. Results reveal that more recent SAE architectures improve these capabilities and task-specific SAEs are much more effective than full-distribution SAEs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The writing and paper structure are very clear and easy-to-follow.\", \"The focus of this paper is highly relevant and interesting. The problem of finding a ground truth with which to evaluate interpretability and explainability methods has remained an issue for decades, and this work works towards solving this problem by exploring using human-generated groundtruths that have been backed up by prior work.\", \"The experiments are well-defined, inutitive, and easy to understand.\", \"I believe the results are interesting and useful - they reveal that task-specific SAEs are more useful in practice than full-distribution SAEs, hinting that data quality is of utmost importance when training SAEs. Further, this suggests that human priors may be useful when developing interpretability methods/SAEs.\"], \"weaknesses\": [\"I find the \\u201cWhy use these attributes?\\u201d paragraph in Section 3.1 to be confusing. If prior work had not proposed the IO, S, and Pos attributes, how would one go about defining the supervised dictionary? If the evaluation pipeline described in this paper were to be used for a different task, I\\u2019m not sure whether this section would be generalizable. In particular, when there are many choices of attributes, what is the manner of choosing between them without using another interpretability method, which would then prevent the need of using an SAE in the first place?\", \"It would have been significantly more convincing to me if the authors had considered more than one task in their evaluation. At the moment, it\\u2019s unclear to me how the proposed methodology and results from this work could be applied to future works that want to evaluate trained SAEs.\", \"The section on interpretability (section 4.4) is also a bit confusing to me - I would find it very helpful if the authors provided interpretations of the SAE latents, and a visualization of how these features could then be used to explain the LLM\\u2019s computation on a single example. Some examples of /possible/ interpretations are provided in Appendix 7.13-7.14, but if I understand correctly these are not the actual labels of the SAE features.\", \"It is my understanding that the authors wish to propose the use od supervised dictionaries an evaluatory baselines for SAEs. However, in practice, this paper reads more as an exploration of whether SAEs can recover the IOI task. 
While the authors discuss the limitations of hardcoding the attributes to compare SAEs against and only considering a single task and model, I believe these drawbacks fundamentally limit the work in its general utility.\"], \"questions\": [\"In section 4.3, if I understand correctly, the SAE latents are found by simply optimizing/searching for the features that perform the task (move one latent to the specified counterfactual latent). This seems a bit roundabout to me - wouldn\\u2019t this propose that you need to know the features you are looking for in order to label SAE features? How would one do this searching or interpretation without access to the counterfactual activations? Wouldn\\u2019t it be more realistic to interpret or label each SAE feature and then use the features that are labelled to be relevant to the task at hand?\", \"Please see the above weaknesses!\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"Dear reviewer,\\n\\nThank you for your detailed and thoughtful comments. \\n\\nWe note that a central concern in your review is the restricted nature of our evaluation (considering only the IOI task with a single model and dataset). To this end, we have applied our framework with minimal changes to a new setup combining a new model, new pre-training dataset, and new linguistic task, for the purpose of evaluating SAEs trained on the full pre-training dataset of the LLM being analyzed. Please refer to our top-level comment on generalizability of results for more details.\", \"to_address_your_other_concerns\": [\"While we agree that focusing on IOI may seem narrow, we believe it serves as a valuable proxy for evaluating SAEs more broadly. Practitioners often need to evaluate SAEs across hyperparameter sweeps or compare new architectures, and the IOI task better represents realistic downstream applications than standard proxy metrics. Importantly, even though IOI is a special case where prior work identified relevant features, evaluating full-distribution SAEs on this task provides evidence for their effectiveness on tasks where ground truth is unknown.\", \"Regarding how attributes are chosen: you say that \\\"In particular, when there are many choices of attributes, what is the manner of choosing between them without using another interpretability method, which would then prevent the need of using an SAE in the first place?\\\" - however, this objection runs orthogonal to the goal of our paper.\", \"Specifically, our goal is to, through whatever means, find a situation in which we can have an *independent* ground-truth evaluation of SAE quality. The prior work on IOI gives us (through lots of manual effort) a good understanding of the IOI circuit, which allows us to set up an independent evaluation by comparing features in the IOI circuit (found via non-SAE means) to SAE latents.\", \"In our newly added task, we have shown that we can construct supervised features like we do for IOI (via conditional expectation over attribute values), and we check that the supervised features disentangle two relevant properties for the task. This serves as validation that the supervised features are relevant to the model's computation (similar to IO, S and Pos).\", \"You say \\\"Some examples of /possible/ interpretations are provided in Appendix 7.13-7.14, but if I understand correctly these are not the actual labels of the SAE features.\\\" - indeed, the labels of SAE latents in figure 5 *are* the ones from Appendix 7.13; we are sorry for the misunderstanding!\", \"You say \\\"How would one do this searching or interpretation without access to the counterfactual activations? Wouldn't it be more realistic to interpret or label each SAE feature and then use the features that are labelled to be relevant to the task at hand?\\\" - we are sympathetic to your objection; some qualifications:\", \"The reason we set things up in this way is that we first ran an extensive validation showing that IO, S and Pos are indeed meaningful for the model's computation on IOI. In this sense, it makes sense from a human perspective to want to see how well SAE latents can control these attributes.\", \"We agree this is somewhat biased, and have tried to be agnostic to whether SAEs represent these features in a 1-to-1 way or some more complex pattern. Still, this does not fully address your concern. 
We hope that future work can move towards an even more agnostic evaluation.\", \"Please let us know if these responses address your concerns. We remain available to answer questions throughout the discussion period.\"]}",
"{\"comment\": \"Thank you to the authors for their thoughtful rebuttal and additional experiments.\\n\\n* Regarding your second point, if your goal is to find independent ground truth evals, but the development of the ground truth comes at the cost of \\\"lots of manual effort,\\\" it is still difficult for me to determine how useful your proposed approach is for evaluation of future models. Wouldn't this same manual effort be required for each task a new model is to be evaluated on? I believe this (the necessary manual labor required to construct the ground truth baselines) should be discussed as a key limitation of the work and made clear in the intro if possible. \\n\\n* Regarding your fourth/last point, I understand why you made this choice and I appreciate that you acknowledge it in the rebuttal, but I believe this too should be noted as a limitation of the work in the main paper. \\n\\nContingent upon the above, I will raise my score to a 6. Thank you!\"}",
"{\"title\": \"Top-level comment on generalizability of results\", \"comment\": \"We thank all reviewers for their feedback. Since the seemingly restricted nature of the paper's evaluation (which is based only on the IOI task) was a shared concern of most reviewers, we would like to respond with a top-level comment addressing this.\\n\\nWe have (in the attached revision) **added a new appendix section (as well as relevant pointers from the main text) on how our methods can be straightforwardly extended to a new natural language task we introduce on a different model and distribution, to test an SAEs ability to independently represent and disentangle different concepts**. We additionally apply our methods to another task from the mechanistic interpretability literature that has been studied in a way similar to IOI: the greater-than task [1]\", \"in_brief\": [\"We generalize the setup of the paper in multiple ways:\", \"**dataset and model**: we use the Tiny Stories dataset and the 4-layer 33M parameter model from the Tiny Stories paper (https://arxiv.org/abs/2305.07759)\", \"**full-distribution SAEs**: we use TopK SAEs trained on the full set of Tiny Stories activations over the 4 residual streams of the model (the dataset has on the order of 100s of millions of tokens)\", \"We consider sentences of the form \\\"NAME really loves ANIMALS. NAME also really loves SPORT. So NAME really loves both\\\"\", \"For example, \\\"Lily really loves cats. Lily also really loves basketball. So Lily really loves both\\\" should be followed by \\\"cats and basketball\\\" or \\\"basketball and cats\\\", so the model should be able to put high probability on both cats and basketball coming next. A well functioning SAE should be able to disentangle these, ie we should be able to damage the model's ability to say cats but not basketball, and vice versa\", \"We consider several values for each of NAME, ANIMALS and SPORT to sample a dataset of 1k prompts of this form\", \"Intuitively, at the \\\"both\\\" token, the model \\\"prepares\\\" to output either the ANIMALS or SPORT. We verify experimentally that the next-token probability distribution after \\\"both\\\" prefers the correct SPORT compared to other sports, as well as the correct ANIMALS compared to other animals.\", \"We turn this into a task to which we apply (key parts of) our framework. Namely:\", \"We compute supervised features corresponding to each given SPORT and ANIMALS value by taking conditional expectations (like in the paper)\", \"We check that the supervised features work to edit the logit distribution in the expected way\", \"We compare this to SAEs via a sparse control test to change the value of one variable at a time via an SAE latent exchange.\", \"This setup also allows us to measure the \\\"disentanglement\\\" between the two attributes (sport and animals) afforded by the SAE latents.\", \"Overall, we see that our framework applies with minimal modification to this different setting, and yields interesting results about SAEs.\", \"We thank the reviewers for pointing out this weakness in the presentation of our paper, and hope that this new addition will serve as a convincing example of the generality of our evaluation framework.\", \"[1] Hanna, Michael, Ollie Liu, and Alexandre Variengien. \\\"How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model.\\\" Advances in Neural Information Processing Systems 36 (2024).\"]}",
"{\"title\": \"The response\", \"comment\": \"Thank you for your feedbacks, which address my all concenrs. I will keep my previous score.\"}",
"{\"title\": \"Response by authors\", \"comment\": \"Thank you for confirming that our response has addressed all your concerns. Given that the concerns that led to your initial score have been resolved, we would greatly appreciate understanding what additional improvements would help you view the paper more favorably.\"}",
"{\"title\": \"Implementing and discussing suggestions provided by reviewer\", \"comment\": [\"Thank you for these great suggestions and the very constructive discussion so far! Upon a re-read, we agree your proposed edits make a lot of sense, and we have implemented ~all of them in the latest revision. A few notes on what we changed, as well as what we didn't change and why:\", \"we added an explicit note in the $F_1$ score's definition that $A$ refers to an interpretation that we wish to evaluate\", \"regarding the reminder about full vs task SAEs in 4.3, we feel this is unwarranted, as this section does not deal with SAEs at all; we feel that pointing the reader to the relevant bars in the figure (referred in the text by color) should be less confusing.\", \"we introduced lightweight notation for the logit difference under an intervention to better explain the necessity formula\", \"we re-used this notation to explain the sufficiency formula, which in turn greatly shortened this section while noticeably improving the presentation. **Thank you very much for this astute observation!**\", \"as suggested, we renamed the part about interpretability of supervised features to \\\"Sparsity of supervised feature interactions\\\"\", \"we included in 5.3. a hypothesis about why task SAEs are better at full-distribution ones at sufficiency and necessity; briefly, we believe it's just because they're trained on a dataset much closer (in fact, precisely matching) to the task distribution under investigation, despite being a much smaller dataset than the full pre-training one.\", \"It has become a theme of other related works in the SAE literature that task SAEs may be a valuable addition to the interpretability toolkit, as they may surface domain-relevant features with low sample complexity.\", \"we shortened section 5.5. significantly, while retaining the key elements of our exploratory evaluation framework; we feel these are still valuable additions to the paper (hence we still keep 5.5)\", \"we expanded the limitations section a bit.\", \"we note that we had already added notes on how the method can be generalized to other tasks, and how we can target only specific attributes and model locations to reduce the amount of \\\"manual\\\" labor required to find good attributes\", \"however, also note that the sufficiency and necessity evaluations are independent of the choice of attributes, because they evaluate SAE reconstructions as a whole. Hence they readily generalize to other tasks (but also tell us nothing about disentanglement)\", \"Let us know if this addresses your concerns. We also remain available throughout the discussion period to discuss any further changes that improve your evaluation of the paper.\"]}",
"{\"title\": \"Response on implementing reviewer feedback\", \"comment\": \"We thank you for the productive discussion and your willingness to update your score in light of our responses. We agree with your feedback, and have implemented it in the latest revision of the paper. Specifically, we added:\\n- in the introduction: \\n\\n> A\\nlimitation of our approach is that it requires potentially substantial per-task effort to independently\\nidentify the relevant attributes, which is proportional to the complexity of the task and the number\\nof attributes we want to consider. However, as we show in Appendix 7.1, our framework allows for\\nthe targeted evaluation of a few attributes in a few locations of the model, which can substantially\\nreduce this effort.\\n\\n- in the discussion on limitations:\\n\\n> This influences many of our evaluations, in particular\\nthe \\u2018sparse control evaluation\\u2019 from Section 5.4, which relies on these attributes to compute the\\ncounterfactual activations used to evaluate edit accuracy.\\n\\nand\\n\\n> Finally, a third limitation is that it takes work to identify in a way independent of SAEs what good\\nattributes for a given task are. We show through two case studies in Appendix 7.1 that, by targeting\\nonly key attributes in a single activation location, we can reduce this effort substantially. This comes\\nat the cost of analyzing a more limited, but still potentially interesting slice of the task.\"}",
"{\"comment\": \"Before this phase of the discussion period ends, we wanted to ask the reviewer whether we have addressed your concerns with our work?\\n\\nWe thank you for the valuable feedback, especially regarding presentation, readability, and the structure of this work, and we believe we have now resolved these issues.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": [\"Dear Reviewer,\", \"Thank you for your detailed and thoughtful review. We especially appreciate your careful analysis of the paper's structure and technical content.\", \"**Regarding additional tasks**\", \"See our top-level comment on additional tasks, where we generalize to new models, datasets and tasks.\", \"**Regarding presentation and readability:** We have:\", \"moved some crucial formulae from the appendix to the main text (e.g., the necessity formula from Section 7.4)\", \"put the information important to our paper about the new SAE variants (TopK and Gated) and how they're trained in the main body\", \"extended the limitations section and added some directions for future work\", \"moved the related work section to directly follow the introduction\", \"added pointers from Figure 1's caption to relevant sections of the paper\", \"reorganized the interpretability sections for better coherence\", \"fixed the minor mistakes you pointed out\", \"moved one of the exploratory figures on interpretability to the appendix\", \"**Regarding technical issues and experimental results:**\", \"On sufficiency in Figure 3: We appreciate your point about averaging potentially hiding per-example changes. We have added a note of the distribution of changes in logit differences to address this concern; briefly, we observe empirically that logit differences are well-concentrated around their means.\", \"On necessity: We agree this important formula should be in the main text and have moved it there. However, we disagree with your characterization that showing necessity necessarily means the difference between 1) and 2) (in your terminology). Specifically, with our setup, a necessity score close to 1 indicates that removing our reconstructions degrades performance similarly to removing all task-relevant information via mean ablation, suggesting our features capture the essential computation. We think this is the meaningful way to evaluate whether the SAE error term contains information relevant for the task.\", \"On probing accuracy in Control: We have revised this section by adding results (figures contained in Appendix) that substantiate the claims made in this part of the paper.\", \"You said: \\\"This is because the probe is linear, which itself can \\\"disentagle\\\" the attributes. \\\". However, this indicates a misunderstanding of the experiment. What we check here is two things:\", \"whether a probe trained to predict e.g. **IO** will \\\"change its mind\\\" when we edit the **IO** attribute in an activation;\", \"Whether probes for the other attributes will keep their original predictions\", \"In other words, we use probes merely as one tool to measure the information encoded in activations. We use this to evaluate whether our interventions have the desired effect on the information encoded in the activations.\", \"**Regarding your questions:**\", \"**About the bias term:** yes, this is by design -\", \"It is well known that activations in LLMs may have non-zero average values over a dataset, especially when this dataset is very narrow/specialized compared to the full pre-training distribution. The bias term is a way to account for this, and let us focus on how activations differ between examples on the IOI task.\", \"As another justification, SAEs also have a similar bias term added to reconstructions. 
In this way, our supervised dictionaries imitate this design choice.\", \"**About F1 score**: you are correct; we have added more explanation about this (with examples) to the relevant parts of the paper.\", \"We are very grateful for your many detailed observations about our paper. Please let us know if these revisions address your concerns. We remain available to answer any additional questions during the discussion period.\"]}",
"{\"title\": \"Response to author\", \"comment\": [\"Thanks to the authors for addressing the comments. I especially appreciate the inclusion of the necessity formula being moved to the main text, adding some limitations and future work, as well as moving the exploratory figures to the appendix. I think the current presentation is better than before - but there are still places of minor improvements.\", \"*F1 score*. I still don't think the F1 score description in section 3 is clear enough. If I understand correctly, the F1 score is a measure for interpretability, which I think in the original comment it is out of place. The description of \\\"A\\\" is still not clear. Is it done automatically, or does it depend on the choice of human-defined subsets as in section 5.5? More comments on the sections on interpretability follows.\", \"*Figure 3*. The supervised/Task SAE/Full distribution SAE are introduced. The Task SAE and Full distribution SAE was slightly introduced in the introduction. I think adding a sentence reminding what Task SAE and full distribution SAE in the beginning of section 4.3 (just like the beginning of 5.1) will do.\", \"*Sufficiency*: I apologize that I did not raise this last time. I think for clarity, one should describe it in terms of formulae. For each p, we have a logit difference logitdiff(p) given by the difference in model logprobs for IO and S names. Thus, if we perform the intervention, we will have a different logit difference, say $logitdiff_{intervention}(p)$. From the text, it seems like what you were doing was to do $E[logitdiff_{intervention}(p)] / E[logitdiff(p)] \\\\approx E[logitdiff_{intervention}(p)] / 3.3$. (Also, are there absolute values any where?) Also, thanks for addressing the concerns of the per-example change.\", \"*Necessity*: Thanks for addressing the concern. I think the presentation has improved a lot. It would be nice to say it even clearer, if I understand correctly, as\", \"$a_{intervention}(p)$ = E(..) + error term,\", \"$a_{mean ablation}(p)$ = E(..),\", \"$a_{clean}(p) = a(p)$.\", \"Then the subscript in logitdiff will be the same as the \\\"a\\\"'s. In this case, we also don't have to use the left arrow to indicate assignments. Regarding the \\\"difference between 1 and 2\\\", I agree that the formula is a reasonable measure for necessity.\", \"*Probing accuracy*: Thanks for addressing the comments. I think I understand what you are doing in this section.\", \"*Bias term*: Thanks for addressing it in the comments.\", \"Upon a second reading, regarding the interpretability sections including the section at the end of section 4 as well as section 5.5,\", \"I think the interpretability at the end of section 4 is very interesting - and I think that alone can be a separate paper, with more experiment results. Thus one could remove this section and put it in the future work. The reason I think it is out of place is because a) this section is not related related to any of the other parts of the paper; b) the results are only shown in the appendix; and c) both the methods and the results are interesting on their own.\", \"Section 5.5 is less interesting (to me) compared to the end of section 4, but it is more related to the other parts of the paper. It is still slightly out of place because the methods and experiments in this section are not clearly described. 
(They are of course much more clearly described in the Appendix)\", \"Both sections are labelled \\\"interpretability\\\" under \\\"evaluating Sparse Feature dictionaries\\\" and \\\"evaluating SAE\\\", but they correspond to different methods. This is also confusing.\", \"My original suggestion was to remove both sections - or put it in the appendix. I still think this should be the case. However, in case the authors strongly prefer to keep these sections, I think they should at least rename the \\\"interpretability\\\" section at the end of section 4 to \\\"Feature-level Interactions\\\" instead.\", \"I also appreciate the addition of more discussions on limitations and future work. The removal of these interpretability sections also can benefit a more thorough discussion on the results and the limitations. For example,\", \"section 5.3, \\\"We find that vanilla task SAEs are good at sufficiency/necessity, but full-distribution SAEs are not.\\\". Are there any discussions or explanations on why this is the case?\", \"Expand on how to extend the proposed methods to datasets other than IOI. Thus, can we compute supervised feature dictionaries without knowing the set {IO, S, Pos}? Can we evaluate sufficiency and necessity using similar methods for SAE features?\", \"The above two questions are just suggestions of discussion items if the interpretability sections are removed.\"]}",
"{\"title\": \"Response to reviewer\", \"comment\": \"Dear reviewer,\\n\\nWe warmly thank you for your enthusiastic reception of our paper and your thoughtful comments. Regarding the relevance of our work to the literature (where you noted your expertise might be limited): ours is the first work to provide a thoroughly validated approximation of ground-truth features to evaluate SAEs against. We believe our evaluation framework provides important results for the IOI task, and we have also shown it can be readily generalized to other linguistic tasks. We have reworked the introduction and related work sections to more clearly situate our paper within the field of SAE evaluations, which we hope will help readers better understand our contribution.\\n\\nFor a concise summary of the main changes in the revision, please also consult our two top-level comments:\\n* **On novelty/motivation**: we describe briefly and clearly our contribution and how we re-worked the paper to present it better\\n* **On generalizability of results**: we give a concrete example with another task where we show that our approach for computing supervised dictionaries and evaluating SAEs works too.\", \"we_have_also_incorporated_your_helpful_suggestions\": [\"Moving the related work section to directly follow the introduction\", \"Adding pointers from Figure 1's caption to relevant parts of the paper\", \"Thank you again for your constructive feedback. We remain available to answer any questions.\"]}",
"{\"title\": \"Adding discussion on open-ended tasks\", \"comment\": \"Thanks again for continuing the constructive discussion in such a timely manner. We're very grateful for your feedback, which has greatly improved the clarity of our paper, and we're glad you find the recent changes a major improvement to the presentation.\", \"regarding_generalization_to_other_tasks\": \"oh, OK - sorry we didn't get that the first time. Yes, generalizing the methods here to open-ended tasks is a very valid question, and qualitatively different from the things currently discussed in the paper. We've uploaded a new revision that adds a short discussion of this at the very end of the paper, pointing to two recent works that seem like promising starting points for crafting supervised feature dictionaries. For the purposes of this paper, we wanted to get the basic parts of the methodology right in a very controlled setting.\\n\\nIn particular, one of the works we added [1] proposes a way to create concept dictionaries that encode single-word concepts by computing vectors with supervision (e.g., by averaging token representations across sentences mentioning the word) and then decomposing activations using these vectors as an overcomplete basis, plus an L1 penalty encouraging sparsity. This is very similar to an SAE - except the decoder vectors are pre-computed and not learned. Some preliminary experiments by one of the authors indicate that these concept vectors are enough to explain a meaningful chunk of activation variance at satisfactory sparsity, but still a lot of variance remains unexplained. Still, this framework gives a scalable way to at least begin to tackle the question of \\n\\\"ground-truth\\\" attributes in open-ended settings. We think this is a very interesting area for future work.\\n\\n[1] Luo, Jinqi, et al. \\\"PaCE: Parsimonious Concept Engineering for Large Language Models.\\\" arXiv preprint arXiv:2406.04331 (2024).\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"Dear reviewer,\\n\\nThank you for your comments. We appreciate your focus on strengthening the paper's motivation and empirical validation.\", \"we_have_made_several_significant_changes_to_address_your_concerns\": [\"**Motivation and Novelty**: We have substantially reworked the introduction and related work sections to better motivate our contribution (see top-level comment). The key insight is simple: while SAEs are increasingly used for model interpretability, existing evaluations rely only on indirect metrics. Our work provides the first ground-truth evaluation framework using supervised features of the same mathematical form as SAEs.\", \"**Additional Empirical Results**: We have strengthened the empirical validation by:\", \"Adding a complete new case study applying our framework to a different model, dataset, and linguistic task (see the top-level comment on generalizability of results)\", \"Demonstrating that our supervised features successfully disentangle relevant task attributes in this new setting\", \"Showing how our evaluation methodology transfers with minimal changes\", \"**English Language**: We have carefully revised the manuscript for clarity and readability, with particular attention to:\", \"Technical explanations and terminology\", \"Flow between sections\", \"Motivation of methodological choices\", \"Please let us know if you would like us to clarify or expand on any of these changes. We remain available for questions throughout the discussion period.\"]}",
"{\"summary\": \"The authors propose a (principled) method allowing to create supervised dictionaries for space features, which allow for evaluating the degree of disentanglement of SAEs. The developed method is then applied to SAEs and LLMs, witnessing not only interpretable later variables, but also providing possibility of editing attributes. Metrics of sufficiency, necessity and control are used for this.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"In general the manuscript is well written and \\\"strong\\\".\\n\\nThe question which is really to ask is how much this finding is relevant for the literature. Here I should say my expertise is perhaps too limited to provide a proper judgment.\", \"weaknesses\": \"Though this has most probably costed a lot of work, the empirical validation of the proposed methodology is rather scarce. Whether the constructed dictionaries would also function for other tasks/semantics is not clear.\\n\\nThe mathematics are well explained, in clear and simple way.\\n\\nSome (not that many) parts of the manuscript I had to read several times, e.g., the title under Figure 1 or the paragraph on interpretability at the end of Section 3.2. But in general formulas aid much understanding.\", \"questions\": \"In the title of Figure 1, could you make a clear connection of the text with precise parts of visuals, to facilitate the understanding?\\n\\nTaking into account the size of the paragraph on related work, it should be possible to describe the related work without much terminology and thus shifting it closer to the beginning of the manuscript. This would allow the reader to better position the framework with respect to the state of the art.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a way to compute sparse feature dictionaries that disentangle model computations similar to an SAE fashion using a supervised method; as well as introduces several ways to evaluate these feature dictionaries from different perspective such as necessity, sufficiency and controls of the feature dictionaries towards a specific task. The author applies the work to the indirect object identification (IOI) using GPT-2 Small, and compare the feature dictionaries obtained by their method, vs some other recent SAE variants such as Gated SAE adn Top-K SAE.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Thorough study in the case of IOI. The use of IOI in the study mainly because we know the \\\"feature\\\" that more or less directly affect the outputs. This minimizes the possibilities of the cases that \\\"SAE features\\\" may not coincide with the features that human understands. This was also implied in Section 4.2.\", \"Extensive study on the paper that includes a lot of results in the appendix.\", \"Some proposed methods such as Necessity and sufficiency, to the control method proposed can be applied to a more general case, as described in Section 4.\", \"This paper also addresses a lot of the details on small nuances in evaluation of SAEs. This includes the discussion in Session 4.2, and the paragraph on \\\"Preventing...\\\" in Section 4.3.\"], \"weaknesses\": [\"**Overall**\", \"As this paper is a case study to IOI, it is somewhat restrictive. I don't think it is necessarily a weakness of the paper though as I don't think anything can be done to address this \\\"weakness\\\". It could be viewed as a stepping stone for future work as well.\", \"Presentation and Readability are the main weakness of this work. For example, experiment results in Section 3 includes a lot in section 4 such as description of FullDistribution SAE, Task SAE, TopK, Gated SAE etc. There are crucial formulae and methods that should be in the main text, but instead they are in the Appendix. For instance, the necessity formula in Section 7.4 ~line 878 in the appendix.\", \"The section on \\\"Interpretability\\\" such as in Section 3.2, Figure 5 and Section 4.4 are out of place. The author even mentioned that \\\"we stress that this experiment is of a rather exploratory and qualitative nature\\\". Putting these sections in the appendix would be much more coherent.\", \"Section 6 on discussions and limitations is too short. There should be more discussions on the experiment results. Possibilities of applying similar techniques to a general settings (as IOI provides the \\\"known features\\\"). Including a discussion on future work may be useful.\", \"**Detailed issues**\", \"On Sufficiency in \\\"Figure 3. Left\\\" - as the experiment is to test whether the reconstructions are sufficient, we would hope to compare the logit difference of the original and the reconstruction. The method in Figure 3 shows the ratio of the *average* logit difference with and without the intervention. This may not be the best because the averaging may hide the changes in logit difference with the intervention for each example. A simpler method like the average changes (absolutized) of the logit difference _may_ work. The authors can also opt to include a brief discussion on this so that it does not seem like they were hiding something. 
For example, showing the distribution of absolute changes of the logit difference in a histogram, or some statistics on it.\", \"Necessity. The experiment is to test whether the reconstructions are necessary. This means that we want to show that without the reconstructions, the model cannot do so well, resulting a drop in model performance - i.e. logit difference. Thus there are three quantities\", \"1) The reconstruction $\\\\hat{a}(p)$\", \"2) The proposed quantity; average plus SAE error term\", \"3) The average.\", \"Showing necessity should be showing the difference between 1) and 2) are large. However, in the main text of the paper, it opts to show the difference between 2) and 3) only. A crucial formula and description of the necessity score directly addressing this problem is in the appendix (Section 7.4, around line 878), which in my opinion, should be in the main text.\", \"\\\"Probing accuracy\\\" in \\\"Control\\\": Seems out of place? Is it referenced somewhere else in the paper? Also no results were shown? In the absence of results, this section on probing accuracy does not seem to achieve the goal of the section: \\\"measures the degree to which the supervised feature dictionaries disentangle the different attributes\\\". This is because the probe is linear, which itself can \\\"disentagle\\\" the attributes. I think it would be better to either remove this section (put this in the appendix), or show some experiment results with discussions on the way to disentangle the attributes.\", \"**Minor**\", \"Section 4.3 Expressing... line ~361. edit 3 --> (3) or Equation 3.\", \"Results (line ~424). objective 4 --> (4) or Equation 4.\", \"missing parenthesis at line 434 (resp. $a(p_t)$ **)** by their...\", \"broken references in appendix (line 883, 935, 1849)\"], \"questions\": [\"Equation 2. Reason for the \\\"bias\\\" term is $E[a(p)]$? Does this mean $E[u_{IO}] + E[u_S] + E[u_{POS}] \\\\approx 0$ if we take the expected value on $a(p)$ and $\\\\hat{a}(p)$? If it is by design, can we make a comment on this design? What is a brief explanation behind this design?\", \"What is F and A in F1 Score? In the text, it seems F=the set of examples activating a specific SAE latent \\\"f\\\". A=binary attribute of a prompt. It seems that the F1 Score is applied **on each SAE latent**, as described in Section 4. How do we get \\\"A\\\" in this case? How do we know which \\\"binary\\\" attribute of a prompt that the latent f corresponds to? Can we give a more detailed explanation in the text? It would be nice to include some examples of F and A (specifically A)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
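One of the reviews in the record above asks what "F" and "A" are in the per-latent F1 score used to check whether SAE latents track binary prompt attributes. The sketch below shows one plausible reading of that metric — each latent is scored by the F1 of "the latent is active on the prompt" against the binary attribute label. The activation threshold of zero, the array shapes, and the function name are assumptions made here for illustration, not details taken from the paper.

```python
import numpy as np

def per_latent_f1(latent_acts: np.ndarray, attribute: np.ndarray) -> np.ndarray:
    """F1 of 'latent is active' (activation > 0) against a binary attribute.

    latent_acts: (n_prompts, n_latents) SAE latent activations.
    attribute:   (n_prompts,) 0/1 labels A(p) for the attribute of interest.
    Returns one F1 score per latent.
    """
    fires = latent_acts > 0                    # F: prompts on which each latent fires
    a = attribute.astype(bool)[:, None]        # A: attribute labels, broadcast per latent
    tp = (fires & a).sum(axis=0).astype(float)
    fp = (fires & ~a).sum(axis=0).astype(float)
    fn = (~fires & a).sum(axis=0).astype(float)
    precision = tp / np.maximum(tp + fp, 1.0)
    recall = tp / np.maximum(tp + fn, 1.0)
    return 2 * precision * recall / np.maximum(precision + recall, 1e-12)
```

Under this reading, a latent would then be associated with the attribute for which its F1 is highest — which appears to be the interpretation the review asks the authors to spell out.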
1NevL7zdHS | Revisiting Mode Connectivity in Neural Networks with Bezier Surface | [
"Jie Ren",
"Pin-Yu Chen",
"Ren Wang"
] | Understanding the loss landscapes of neural networks (NNs) is critical for optimizing model performance. Previous research has identified the phenomenon of mode connectivity on curves, where two well-trained NNs can be connected by a continuous path in parameter space where the path maintains nearly constant loss. In this work, we extend the concept of mode connectivity to explore connectivity on surfaces, significantly broadening its applicability and unlocking new opportunities. While initial attempts to connect models via linear surfaces in parameter space were unsuccessful, we propose a novel optimization technique that consistently discovers Bézier surfaces with low-loss and high-accuracy connecting multiple NNs in a nonlinear manner. We further demonstrate that even without optimization, mode connectivity exists in certain cases of Bézier surfaces, where the models are carefully selected and combined linearly. This approach provides a deeper and more comprehensive understanding of the loss landscape and offers a novel way to identify models with enhanced performance for model averaging and output ensembling. We demonstrate the effectiveness of our method on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets using VGG16, ResNet18, and ViT architectures. | [
"mode connectivity",
"Bézier surfaces",
"loss landscape",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=1NevL7zdHS | https://openreview.net/forum?id=1NevL7zdHS | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ztjAH3pYHb",
"qaY2e1tPCc",
"ovS6oWcQp9",
"oLpeENamS6",
"kxrHm9Ox3s",
"hk51hI7I7n",
"g18zIZ1AaF",
"aQSzOgfYk8",
"RDiRQsCnmM",
"QnPI6wseJf",
"LRmRI0xF6n",
"Kb1r76GAxO",
"KWDKjSXfnx",
"Dxg6mq9xsx",
"B3IPO3K5We",
"8HV4qTlmvP",
"6JjwAO6WSZ",
"5maGM8RVlo",
"3pyJX3b0gz",
"3EjEYXI3Cz",
"0RemxSJpyN"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730461100796,
1732632768437,
1732591885132,
1732551834049,
1732262190224,
1730689083505,
1737523940423,
1730643794333,
1732263123131,
1732633724366,
1734501765385,
1732516088184,
1732592355510,
1732259864602,
1733023016375,
1732593252718,
1732262913606,
1732592434385,
1732261413992,
1733025580014,
1732261512911
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_3hmG"
],
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_b47E"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_3hmG"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_LRVF"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_b47E"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Area_Chair_JP5R"
],
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_LRVF"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Reviewer_LRVF"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8888/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The paper explores the concept of mode connectivity in neural network loss landscapes, expanding it from traditional curve-based connections to surface-based connections. This approach offers a comprehensive way to merge models, enabling applications such as model averaging and output ensembling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tExtending mode connectivity from curves to B\\u00e9zier surfaces is a significant topic.\\n2.\\tThe proposed method is sound.\\n3.\\tWriting is good to follow.\\n4.\\tThe figure illustration is good in this paper\", \"weaknesses\": \"1.\\tOnly evaluate the performance on small datasets. Large datasets like image-net should be included.\\n2.\\tLack of theoretical analysis.\", \"questions\": \"See the above weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"I thank the authors' comprehensive responses and they mostly resolved my questions.\\nI will raise my score.\"}",
"{\"title\": \"Response to follow-up questions (Part 1)\", \"comment\": \"Thank you for your follow-up question! We\\u2019re happy to address your concerns and provide further clarification.\\n\\n> ### **Q1: I still don't fully see why [1,2] fundamentally differ from this work. All of them seem to construct some kind of spaces of low loss, albeit with different parametrisations (simplexes, wedges, Bezier). Other than the parametrisation, is there another fundamental difference that I'm missing?**\\n\\nWe appreciate the reviewer's thoughtful comments and would like to leverage the opportunity to further clarify the fundamental differences between our work and the referenced methods, [1] and [2]. While it is true that all approaches aim to explore spaces of low loss in the parameter space, the methods are fundamentally distinct in terms of their construction, optimization, and underlying mathematical properties. Below, we highlight the main differences.\\n\\n**1. Scope of Exploration**\\n\\n- Our Work (B\\u00e9zier Surfaces): Extends mode connectivity to **continuous and smooth** **surfaces** in a **non-linear manner**, exploring a broader and higher-dimensional parameter space compared to curves or piecewise-linear structures. This enables richer structural insights and access to more diverse models with low loss. The nonlinear nature of B\\u00e9zier surfaces defined by two parametric directions also ensures better flexibility in capturing the curvature of the loss landscape. Our method also scales well when the dimensionality increases, while remaining **straightforward to visualize and interpret the loss landscape**.\\n- Simplicial Complexes [1]: Focuses on connecting modes via **discrete simplices**, restricting exploration to **localized, piece-wise linear** approximations. The resulting surfaces are more akin to linear mode connectivity surfaces, as discussed in our paper, with limited ability to capture nonlinear behaviors or global curvature. Additionally, **visualizing the loss landscape of simplicial complexes becomes challenging** in high-dimensional spaces, particularly as the simplicial structure grows in complexity.\\n- Wedges [2]: Primarily a framework modeling **linear interpolation** of manifolds with sharp transitions, providing limited exploration outside the defined wedge intersections. In addition, as dimensionality increases, accurately characterizing n-wedges and intersections becomes increasingly challenging and prone to errors. It is also **difficult to visualize the loss landscape** in high-dimensional spaces using Wedges. Furthermore, the framework introduces **additional layers of abstraction** (e.g., long directions, short directions), which may complicate its interpretability.\", \"beyond_parametrization\": \"The optimization dynamics and practical applicability of our method differ significantly. B\\u00e9zier surfaces **support a global optimization process, do not have linearity constraints, and do not require consideration of conflicting objectives during learning**.\\n\\n````\\nThis response continues in the second half below.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your rebuttal. My concerns have been addressed and I'd like to keep my score.\"}",
"{\"title\": \"Response (Part 1)\", \"comment\": \"Thank you for acknowledging the strengths of our paper, including:\\n\\n1. Introducing a novel method for extending mode connectivity to two-dimensional B\\u00e9zier surfaces.\\n2. Demonstrating the effectiveness of our approach through clear visualizations and experiments across various architectures (VGG16, ResNet18, ViT) and datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet).\\n3. Organizing the paper in a well-written and accessible manner, supported by illustrative plots that facilitate understanding.\\n\\nBelow, we address your concerns in detail.\\n\\n> ### **Premature Claim of \\\"Low-Loss Curve\\\" in Line 161 (Bottom of Page 3)**. \\n\\n\\nThank you for pointing this out. This is indeed a typo, and we have corrected it to: \\\"B(t) denotes a curve in the parameter space.\\\"\\n\\n\\n\\n>### **Ambiguity in the Central Question About \\\"Both Low Loss and High Accuracy\\\" (Line 195)**. \\n\\nThank you for highlighting the ambiguity. The central question on line 195 refers to the challenge of finding a surface in parameter space that maintains low loss on the training set while achieving high accuracy on the test set, emphasizing generalization. We will revise the phrasing to explicitly state:\\n\\n**\\\"How can we identify a surface in parameter space that achieves low training loss and high test accuracy?\\\"**\", \"this_revision_clarifies_our_primary_finding\": \"B\\u00e9zier surfaces enable this balance by providing continuous low-loss regions in the parameter space, preserving training loss while leveraging the alignment between loss valleys and accuracy peaks for strong generalization. Thank you for the opportunity to refine this key point.\\n\\n\\n\\n>## **Rationale for Defining $q_{uv}$ in Equation 8 and Substitution with Uniform Distribution**. \\n\\nThank you for raising this question, as it provides an opportunity to clarify the reasoning behind our formulation and approximation.\\n\\n1. **Theoretical Motivation for $q_{uv}$:**\\n $q_{uv}$ represents the normalized density of points on the B\\u00e9zier surface, weighted by the gradient of the parameterized surface. This density accounts for the varying distribution of points across the surface, ensuring the loss integral reflects the true geometric properties of the surface.\\n\\n2. **Challenge with Direct Use of $q_{uv}$:**\\n Direct computation of $q_{uv}$ is intractable for stochastic gradient-based optimization because it relies on the gradients of the parameterization $\\\\phi_\\\\theta(u, v)$, where $\\\\phi_\\\\theta(u, v)$ depends on learned parameters $\\\\theta$.\\n\\n3. **Substitution with Uniform Distribution for Practical Optimization:**\\n\\n **As stated above Equation (9) in the main paper, we introduce a surrogate loss to ensure the optimization remains tractable.** Inspired by prior works on curve-based mode connectivity [1], we approximate $q_{uv}$ with a uniform distribution over the parameter space $[0,1] \\\\times [0,1]$. This simplifies the loss function to:\\n $$\\n \\\\mathcal{L}(\\\\theta) = \\\\int_{0}^{1} \\\\int_{0}^{1} L(\\\\phi_\\\\theta(u, v)) \\\\, du \\\\, dv,\\n $$\\n\\n where $u, v \\\\sim U(0,1)$, the uniform distribution on $[0,1]$.\\n\\n4. 
**Benefits of Approximation:**\\n \\n **Computational Efficiency:** The uniform distribution avoids dependence on the gradients $\\\\phi'_\\\\theta(u, v)$, allowing for efficient sampling-based optimization.\\n\\n **Empirical Robustness:** As shown in our experiments, this approximation does not significantly degrade performance. The constructed surfaces consistently maintain low loss and achieve high accuracy across architectures and datasets.\\n\\nWe added more explanation of this choice in the manuscript to clarify its theoretical motivation and practical implications.\\n\\n\\n\\n>### **Lack of Distance Quantification Between Corner Control Points in Loss Landscape Visualizations**. \\n\\nThank you for your observation. We provide the following clarifications:\\n\\n1. **Diversity of Control Points:**\\n\\n In our experiments, the four corner models are trained independently and reside in different basins, making them incompatible with simple linear interpolation for linear mode connectivity. This ensures our approach explores diverse regions of the parameter space. Our method is specifically designed to connect models with diverse initializations, varying training settings, and different augmentations.\\n\\n2. **Special Case of Similar Models:**\\n\\n Your concern aligns with a specific scenario discussed in the paper under \\\"Existence of Linear Surface Mode Connectivity.\\\" In cases where corner models are highly similar and satisfy linear mode connectivity, our method constructs a low-loss surface without requiring additional optimization. This behavior demonstrates the generality of our method.\\n\\nWe have expanded on this discussion in the revision to highlight these scenarios more explicitly.\\n\\n```\\nThis response continues below.\"}",
"{\"summary\": \"This paper investigates connecting multiple models in parameter space through constructing appropriate surfaces. It is well-known that a simple linear hyperplane does not suffice and non-linear methods are needed. To that end, the authors propose using B\\u00e9zier surfaces, where four points are used to represent the model parameters and nine other points in the parametrization are subsequently optimized such that uniformly sampled surface points also have low loss. The authors show that they can construct surfaces exhibit low loss everywhere, thus succesfully connecting multiple models with a single surface. They further show that the best point on the surface outperforms all the individual models and can thus be used to merge several models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is very well-written and clearly explains the problem as well as the techniques that aim to solve them. I like that the proposed method simply consisting of B\\u00e9zier curves remains rather simple.\\n2. The experiments performed show quite convincingly that the proposed method succeeds in connecting multiple minima. The authors also investigate a variety of architectures, making the results stronger.\", \"weaknesses\": \"1. There seem to be quite a few related works missing that also explore the construction of surfaces to connect multiple minima [1, 2, 3, 4, 5]. The authors definitely need to add the listed papers to the related works and clearly articulate how theirs is different and what advantages it provides.\\n2. For model merging, the authors do not seem to compare against any other method? It would be interesting to understand whether this technique allows one to leverage the diversity from all the points (that were obtained using different inits and shuffling). Standard merging always needs to be careful to end up in the same basin, and thus diversity of the points seems naturally reduced. Similarly for the output ensembling experiments, the obvious baseline of solely ensembling the four end points is missing. Does the surface really provide diversity beyond those four points? This is currently unclear with the provided experimental results.\\n3. I think taking the best performing point on the entire surface is (1) a bit an unfair comparison and (2) very expensive to do as a dense grid of models needs to be evaluated on the test set. I think it would be more appropriate and efficient to compare against some sort of \\u201cmean\\u201d value on the surface. Does a B\\u00e9zier curve admit a natural \\u201ccentroid\\u201d? If yes, how does that one perform compared to the individual models? \\n4. Another related work for model merging is [6] which explored how a given ensemble can be constructed within the same convex region, and thus also allowing to average weights while still profiting from diversity. It would be interesting to understand which approach works better. \\n\\n\\n[1] Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling, Benton et al., 2021\\n\\n[2] Large Scale Structure of Neural Network Loss Landscapes\\n\\n[3] Loss landscape sightseeing with multi-point optimization, Skorokhodov et al., 2019\\n\\n[4] A deep neural network\\u2019s loss surface contains every low-dimensional pattern, Czarnecki et al., 2019\\n\\n[5] Examining the geometry of neural mode connecting loss subspaces, Chen et al. \\n\\n[6| How good is a single basin? 
Lion et al., 2023\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"This paper explores extending the concept of \\\"mode connectivity\\\" in neural networks from one-dimensional paths to two-dimensional surfaces using B\\u00e9zier surfaces. Traditionally, mode connectivity demonstrates that two trained models can be connected by a low-loss path in parameter space. Here, the authors introduce a novel method to connect multiple models on a smooth, low-loss surface, broadening the potential for optimization and generalization. They detail an algorithm that constructs and optimizes B\\u00e9zier surfaces to maintain low loss and high accuracy across various architectures (VGG16, ResNet18, ViT) and datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet). The study finds that nonlinear surfaces outperform simple linear interpolations, especially for model averaging and ensembling applications, ultimately enhancing performance in tasks like model merging and ensemble accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and well-organized. It is easy to read.\\n2. The visualizations and plots are also very clear, facilitating the understanding.\", \"weaknesses\": \"1. **Premature Claim of \\\"Low-Loss Curve\\\" in Line 161 (Bottom of Page 3)**.\\nIn line 161, the paper refers to a \\\"low-loss curve\\\" as if it were already established, though at this point, neither the method nor specific criteria for determining a \\\"low-loss\\\" property have been introduced. Could the authors clarify what they mean by \\\"low-loss\\\" here and either postpone this claim until it is better supported or define it explicitly at the outset? Additionally, grounding this concept with a preliminary explanation or notation would improve clarity.\\n\\n\\n2. **Rationale for Defining $q_{uv}$ in Equation 8 and Substitution with Uniform Distribution**.\\nThe definition of $q_{uv}$\\u200b in Equation 8 lacks an explanation of its theoretical motivation and why it can be replaced by a uniform distribution for practical purposes. What are the specific benefits of this choice, and how does this approximation impact the accuracy or reliability of the surface mode connectivity in experiments? A deeper rationale for this formulation would clarify its role in the model's performance.\\n\\n\\n3. **Lack of Distance Quantification Between Corner Control Points in Loss Landscape Visualizations**.\\nThe visualizations of the loss landscapes do not quantify or highlight the parameter distances between corner points (control points). If these control points represent very similar models with minor parameter variations, the diversity of the parameter space explored may be limited, especially when these models are trained under comparable conditions. How would the approach fare with intentionally diverse model initializations, varying training settings, or other augmentations? Such differences could test the robustness of the surface connectivity under broader training conditions.\\n\\n4. **Limited Impact of Experiments and Marginal Gaps in Results (e.g., Table 1)**.\\nThe experimental evaluation relies primarily on relatively small datasets like CIFAR-10, CIFAR-100, and Tiny-ImageNet, which may limit the generalizability of the findings to larger, more complex datasets. Additionally, Table 1 shows only marginal improvements between the baseline and the model merging or ensembling results. 
Could the authors address how these findings might scale to larger datasets and discuss the significance of these marginal gaps, particularly given the computational overhead involved in the proposed approach? Expanding on the implications for practical, large-scale applications would enhance the impact of these results.\", \"questions\": \"1. **Ambiguity in the Central Question About \\\"Both Low Loss and High Accuracy\\\" (Line 195)**.\\nThe central question on line 195 could benefit from greater specificity regarding \\\"low loss\\\" and \\\"high accuracy.\\\" Are the authors referring to training loss and testing accuracy? Given the generalization gap, distinguishing training and testing here would provide meaningful context. If both metrics are from the same set (either training or testing), the statement may be redundant, as low loss often correlates with high accuracy on that set. Specifying if this is about generalization (low training loss translating to high testing accuracy) could substantiate the relevance of this question.\\n\\n2. **Scalability Concerns for Optimization of Many Parameters (\\u03b8) in Equation 6**.\\nEquation 6 implies a potentially extensive optimization of numerous control points (\\u03b8 values) across the B\\u00e9zier surface. This approach seems computationally heavy, especially for large models with millions of parameters. Could the authors discuss the scalability of this optimization? Is there any strategy to reduce the computational load or parameterize this approach efficiently to make it viable for larger architectures?\\n\\n3. **Justification for Selecting Models from Specific Epochs (Figure 6)**.\\nFigure 6 shows models chosen from epochs 220, 200, 180, and 160. However, it\\u2019s unclear why these specific epochs were selected or why only a single training trajectory was used. Would models from other epochs, or from different training trajectories, produce similar results? Providing a rationale for these choices or showing comparative results could help validate the generalizability of this selection process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response (Part 3)\", \"comment\": \"```\\nContinuing from the second part of this response\\n```\\n> ### **Limited Impact of Experiments and Marginal Gaps in Results (e.g., Table 1)**. \\n\\n\\nWe appreciate your suggestion to include larger datasets like ImageNet, but we would like to clarify the rationale behind our experimental setup:\\n\\n**Established Practice in Mode Connectivity Research**: The datasets used in our paper are consistent with those employed in existing methods. The seminal paper *\\\"Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs\\\"* [1] focused solely on CIFAR-10 and CIFAR-100 datasets. This trend has continued in later works like *\\\"Layer-wise Linear Mode Connectivity\\\"* [2] and even recent ICLR 2024 submissions. Following this precedent, we included Tiny ImageNet, which is considered a relatively larger dataset in this research area.\\n\\n\\n> ### **Table 1 shows marginal improvements in model merging/ensembling results. Could the authors address scalability to larger datasets and the significance of these gaps, given the computational overhead?**\\n\\nThank you for your observation regarding the marginal improvements in Table 1. While model merging is not the primary focus of our paper, we only consider it potential as one of the practical applications of surface mode connectivity. There are several possible avenues to enhance model merging performance based on our approach:\\n\\n1. Increasing Training Epochs:\\n We find that extending the training duration can improve the performance of the models on the surface, as longer training epochs allow for better optimization of the parameters and refinement of the low-loss regions. This, in turn, can enhance the merging outcomes. Although not significant, we find increasing epoch in our phase 3 of training can further improve the generalization ability on our Bezier Surface.\\n2. Leveraging Multiple Surfaces:\\n Combining points from multiple B\\u00e9zier surfaces, rather than relying on a single surface, offers an additional degree of freedom to improve model merging. This approach could further exploit the diversity captured across different surfaces, potentially leading to enhanced performance.\\n\\nWe would like to mention that Our main finding is providing a mathematical frame work to construct a surface that maintains low test loss and test accuracy across multiple models, and model merging is only one potential application. While these strategies lie beyond the current scope of our study, they represent promising directions for future exploration to scale our findings to larger datasets and more complex tasks. We appreciate your suggestion and will incorporate a discussion of these possibilities in the revised manuscript**.**\\n\\n\\n\\n**References:**\\n\\n[1] Garipov, Timur, et al. \\\"Loss surfaces, mode connectivity, and fast ensembling of dnns.\\\" Advances in neural information processing systems 31 (2018).\"}",
"{\"title\": \"Appreciation for Your Detailed Feedback\", \"comment\": \"Thank you for your thoughtful feedback and for raising your score! We greatly appreciate your input and are glad our responses addressed your questions. We are confident our work contributes meaningful insights and new perspectives on the deep learning loss landscape.\"}",
"{\"metareview\": \"The paper extends mode connectivity in neural network loss surfaces to two-dimensional Bezier surfaces. Specifically, the authors extend the observations and methodology of [1] from training 1-dimensional Bezier curves to 2-dimensional Bezier surfaces. They first define a loss function for training the surface with 4 fixed corner points. Then, they develop a method for optimizing the loss where they first fit the edges of the surface and then fit the inner part. The authors show that the proposed method finds surfaces with low train loss and high test accuracy across multiple architectures and image classification datasets. They also provide results on ensembling of points within the surface and model merging.\", \"strengths\": [\"The paper is well-written\", \"The proposed methodology is sound\", \"The method works well and achieves the goal set by the authors\"], \"weaknesses\": [\"The paper is a pretty direct extension of the observations and methodology in [1]. Also, [2] has previously demonstrated that generally mode connectivity holds with high-dimensional connecting manifolds. So the observations and insights are not completely novel. The authors argue that [2] is qualitatively different as it constructs a locally-linear surface; however, the full surface is still non-linear, and it's not clear why a smooth Bezier surface is a major improvement over a simplicial complex.\", \"The experiments are conducted on smaller datasets, same as [1]. It is worth noting that [1] was written in 2018, and the standard for empirical studies should be higher now. Having experiments at ImageNet scale would be good.\", \"It would be nice to see results of merging models trained on different tasks, e.g. different subsets of data, where the merging is actually beneficial. The current model merging results are more of a proof-of-concept and not a practical improvement.\"], \"decision_recommendation\": \"Despite the limitations, the paper provides new results on mode connectivity, the methodology is sound and the presentation is strong. I recommend accepting the paper, but I also suggest that the authors should add more discussion of differences with [2] and the other relevant papers highlighted by reviewers in the final version of the paper.\\n\\n[1] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs\\nTimur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P. Vetrov, Andrew G. Wilson\\n\\n[2] Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling\\nGregory Benton, Wesley Maddox, Sanae Lotfi, Andrew Gordon Gordon Wilson\", \"additional_comments_on_reviewer_discussion\": \"The reviewers unanimously recommend accepting the paper. Reviewers engaged with the rebuttal from the authors, and 2 of them raised their scores based on the rebuttal. The reviewers had concerns about the relationship to prior work, as well as requested additional details on the method and results. The reviewers were satisfied with the responses from the authors.\"}",
"{\"title\": \"Response\", \"comment\": \"I thank the authors for engaging with my feedback and providing responses!\\n\\n**Related works:** I still don't fully see why [1,2] fundamentally differ from this work. All of them seem to construct some kind of spaces of low loss, albeit with different parametrisations (simplexes, wedges, Bezier). Other than the parametrisation, is there another fundamental difference that I'm missing?\\n\\n**Ensembles:** The gains from ensembling seem surprisingly low (just 0.4%) in case of VGG. If I understand correctly, the optimal point on the Bezier curve outperforms this ensemble by 0.3% (achieving 90.7%), which would be a very cool result. Do you observe more such improvements? Or am I confusing models here?\\n\\n**Train loss for selection:** Are training loss numbers available as a byproduct of training for all the points, towards the end if training especially? Otherwise one would need to do a costly grid evaluation again on the training set, no?\"}",
"{\"title\": \"Response to follow-up questions (Part 2)\", \"comment\": \"```\\nContinuing from the first part of this response\\n```\\n\\n> ### **Ensembles: The gains from ensembling seem surprisingly low (just 0.4%) in case of VGG. If I understand correctly, the optimal point on the Bezier curve outperforms this ensemble by 0.3% (achieving 90.7%), which would be a very cool result. Do you observe more such improvements?**\\n\\nThank you for the observation and the opportunity to clarify. The highest accuracy point on our B\\u00e9zier surface indeed outperforms the four-corner ensemble by 0.3%, highlighting the additional diversity and optimization potential provided by the surface. We further demonstrate that additional information can be obtained across the entire surface, contributing to improved generalization, with a 1.6% accuracy boost in surface-based ensembling compared with ensembling with only four corner models. This comparison highlights the advantage of ensembling all sampled models from the surface versus relying solely on the four corner points.\\n\\n**Do you observe more such improvements?**\\n\\nYes, we observe similar results with ResNet18, where the highest accuracy point on the surface surpasses the ensemble of the four corner models. This aligns with the observation for VGG16, where the highest point on the surface outperforms the four-corner ensemble.\\n\\nAdditionally, as mentioned above, surface-based ensembling achieves an accuracy boost of approximately 2.4% compared to the average accuracy of the corner models and 1.6% compared to ensembling only the four corner models. \\n\\n**Reasoning Behind the Improvement:**\\n\\nEvery point on the B\\u00e9zier surface represents a local minimum discovered through non-linear optimization. By updating the control points in parameter space, we construct a bezier surface covered by diverse minimum points distinct from the initial corner models. This diversity allows the surface to capture additional information beyond the four corner points, helping identify solutions with better performance and improved generalization in a nonlinear manner.\\n\\n> ### **Train loss for selection: Are training loss numbers available as a byproduct of training for all the points, towards the end if training especially? Otherwise one would need to do a costly grid evaluation again on the training set?**\\n\\nThank you for raising this question. Below, we address your concerns about evaluating the training loss on the surface:\\n\\n**Using Batch-Level Approximation for Training Loss Surface:**\\nDuring training, approximately 80 points are sampled per batch, covering most of the regions on the surface. We observed that even evaluating the losses for the last few batches serves as a reliable approximation of the full training loss on the surface. This consistency suggests that the loss values from a smaller subset of batches can effectively represent the overall surface, potentially reducing computation costs without significant accuracy loss. If further evaluation is desired after training, we propose estimating the loss surface using only a subset of the training data. This method provides a good approximation of the loss over the entire training dataset, enabling efficient evaluation while maintaining reliability. 
\\n\\nIn fact, in our experiment on CIFAR-10 using the VGG16 architecture, we evaluated the training loss surface using a subset of data equivalent to four epochs, it revealed that the peaks and valleys identified closely align with those obtained using the full training dataset, further confirming the robustness of this approximation. In our experiment, when evaluating the full training dataset, the valley of the loss surface was located at the (u, v) pair (0.9, 0.2). When evaluated on a subset of the dataset, the valley shifted slightly to the (u, v) pair (0.9, 0.1). However, both points lie within the same low-loss region, indicating consistency in the surface's overall structure. Similarly, the peak of the loss surface remains consistently located at the (u, v) pair (0.4, 1.0) for both the full dataset and the subset evaluation. We have included these comparisons and the corresponding findings in the revised version of the paper to further substantiate the evaluation methods and results.\\n\\n```\\nThis response continues in the third part below.\"}",
"{\"comment\": \"Thank you for recognizing the key strengths of our work, including:\\n\\n1. Extending mode connectivity from curves to surfaces, enabling a deeper exploration of the loss landscape.\\n2. Proposing a sound and efficient algorithm for constructing Be\\u0301zier surfaces.\", \"we_address_your_concerns_as_follows\": \"> ### **Only evaluate the performance on small datasets. Large datasets like image-net should be included.**\\n\\n\\nWe appreciate your suggestion to include larger datasets like ImageNet, but we would like to clarify the rationale behind our experimental setup:\\n\\n- **Established Practice in Mode Connectivity Research**: The datasets used in our paper are consistent with those employed in existing methods. The seminal paper *\\\"Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs\\\"* [1] focused solely on CIFAR-10 and CIFAR-100 datasets. This trend has continued in later works like *\\\"Layer-wise Linear Mode Connectivity\\\"* [2] and even recent ICLR 2024 submissions. Following this precedent, we included Tiny ImageNet, which is considered a relatively larger dataset in this research area.\\n- **Supplementary Studies**: To further validate our method, we conducted additional experiments on surfaces with 4\\u00d74 and 5\\u00d75 control points using CIFAR-10. These setups represent more complex configurations. The results show that even with increased complexity, Be\\u0301zier surfaces consistently maintain low-loss and high-accuracy properties across all configurations (3\\u00d73, 4\\u00d74, and 5\\u00d75 control points).\\n\\n| **Control Points** | **Dataset** | **Avg acc of four corner models** | **Avg surface accuracy** | **Highest acc from sampled surface** |\\n| ------------------ | ----------- | :-------------------------------: | :----------------------: | :-----------------------------------: |\\n| 3\\u00d73 | CIFAR-10 | 80.2 | 79.7 | 82.0 |\\n| 4\\u00d74 | CIFAR-10 | 80.2 | 80.0 | 82.4 |\\n| 5\\u00d75 | CIFAR-10 | 80.2 | 80.3 | 82.9 |\\n\\nThese findings demonstrate the robustness of our method, providing confidence in its ability to scale to more complex datasets if needed.\\n\\n\\n\\n> ### **Lack of Theoretical Analysis**\\n\\n**Our primary contribution is the development of a mathematical framework for B\\u00e9zier surface-based mode connectivity, supported by empirical demonstrations of its effectiveness and its utility in exploring the loss landscape.** While we acknowledge the value of theoretical analysis, **it is worth noting that most breakthroughs in mode connectivity started with empirical observations, with theoretical guarantees developed later.** For example:\\n\\n- **Mode Connectivity (Curves)**: Discovered empirically in 2017 [1], rigorous proofs emerged for simple architectures like two-layer ReLU networks [3] in 2019.\\n- **Linear Mode Connectivity**: Published empirically in 2018 [4], formal theoretical results appeared in 2023 [5].\\n\\nSimilarly, our work establishes a new empirical property of Be\\u0301zier surfaces: their ability to connect neural networks with low-loss, high-accuracy regions. We believe this is a critical first step, paving the way for future theoretical exploration. 
While a theoretical guarantee would be a welcome addition, we feel that such an undertaking is beyond the scope of this paper.\\n\\n\\n\\n**References**\\n\\n[1] Garipov et al., *Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs*, 2018.\\n\\n[2] Skorokhodov et al., *Layer-wise Linear Mode Connectivity*, ICLR 2024.\\n\\n[3] Arora et al., *Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets*, 2019.\\n\\n[4] Frankle et al., *Linear Mode Connectivity and the Lottery Ticket Hypothesis*, 2018.\\n\\n[5] Zhao et al., *Proving Linear Mode Connectivity of Neural Networks via Optimal Transport*, 2023.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your response and in particular for patiently explaining the differences between your method and the related works. I am more convinced now regareding the novelty of the outlined approach! I have increased my score accordingly.\"}",
"{\"title\": \"Thanks for the review and further discussion is welcomed\", \"comment\": \"Thank you for your thoughtful review and for taking the time to evaluate our work. We believe our approach contributes meaningful novelty to the field and provides new insights into the deep learning loss landscape.\\nWe welcome any additional feedback or suggestions to further improve our work.\"}",
"{\"title\": \"Response (Part 2)\", \"comment\": \"```\\nContinuing from the first part of this response\\n```\\n\\n> ### **Justification for Selecting Models from Specific Epochs (Figure 6)**. \\n1. **Selection Rationale:**\\n Models from epochs 220, 200, 180, and 160 were chosen to specifically demonstrate linear mode connectivity. This setup shows that, in such cases, our method can construct a low-loss surface without additional training.\\n2. **Generalizability:**\\n While these specific epochs were selected for illustration, models from other epochs or training trajectories would likely produce similar results, provided they also satisfy linear mode connectivity.\\n\\nWe added this rationale in the revision and included comparative results to validate the generalizability of this selection process.\\n\\n\\n\\n> ### **Scalability Concerns for Optimization of Many Parameters (\\u03b8) in Equation 6**. \\n\\nWe appreciate the reviewer\\u2019s concerns regarding the scalability of optimizing numerous control points ($\\\\theta$ values) across the B\\u00e9zier surface, especially for larger models with millions of parameters. Below, we address these concerns and share our findings on improving computational efficiency:\\n\\n1. Efficient Computation Design:\\n The optimization of the B\\u00e9zier surface scales well with model size under our designed alogirhm and has been successfully implemented on architectures like Vision Transformers (ViT), which have significantly larger parameter counts compared to simpler models like CNNs. This demonstrates the scalability of our method to modern architectures.\\n2. Experiments on more control points: \\n To further support our efficient computational design, we conducted additional experiments on surfaces with 4\\u00d74 and 5\\u00d75 control points using CIFAR-10. These setups represent more complex configurations. The results show that even with increased complexity, Be\\u0301zier surfaces consistently maintain low training loss and high test accuracy properties across all configurations (3\\u00d73, 4\\u00d74, and 5\\u00d75 control points).\\n\\n| **Control Points** | **Dataset** | **Avg acc of four corner models** | **Highest acc from sampled surface** | **Avg acc for the surface model** |\\n| ------------------ | ----------- | --------------------------------- | ------------------------------------ | --------------------------------- |\\n| 3\\u00d73 | CIFAR-10 | 80.2 | 82.0 | 79.7 |\\n| 4\\u00d74 | CIFAR-10 | 80.2 | 82.4 | 80.0 |\\n| 5\\u00d75 | CIFAR-10 | 80.2 | 82.9 | 80.3 |\\n\\n3. Layer-Specific Optimization for Efficiency:\\n To further improve efficiency, we conducted experiments where only a subset of the model\\u2019s layers were optimized, rather than the entire network. Specifically, we updated only the last convolutional layer and the fully connected layers in the Multi-Layer Perceptron (MLP) module. This approach significantly reduced the computational overhead while maintaining competitive accuracy.\\n Due to the time constraints, we validated this method using a simple convolutional neural network (CNN) architecture, consisting of three convolutional layers (kernel size 3, padding 1), max-pooling for downsampling, and a 256-dimensional fully connected hidden layer. 
The final layer matched the number of classes.\\n\\n \\n\\n| **Updated layers** | **Dataset** | **Avg acc of four corner models** | **Highest acc from sampled surface** | **Avg acc for the surface model** | **Time for the experiment** |\\n| ------------------------------------------- | ----------- | --------------------------------- | ------------------------------------ | --------------------------------- | --------------------------- |\\n| All layers | CIFAR-10 | 80.2 | 82.0 | 79.7 | 47 min for 26 epochs |\\n| Last convolutional layer and last MLP layer | CIFAR-10 | 80.2 | 80.3 | 70.3 | 15 min for 26 epoch |\\n\\nThe time experiment is conducted on one card 4090. Our experiments, conducted under limited epochs due to rebuttal time constraints, showed that optimizing only a few layers led to a small drop in accuracy while yielding significant efficiency gains. optimizing only the last few layers results in a small accuracy drop, while significantly reducing the computational cost, making the approach more viable for larger architectures and datasets.\\n\\n```\\nThis response continues below.\"}",
"{\"title\": \"Response to follow-up questions (Part 3)\", \"comment\": \"```\\nContinuing from the second part of this response\\n```\\n**Test Accuracy and its Approximation:**\\n\\nAdditionally, it\\u2019s worth noting that the grid search for the accuracy surface occurs only during the inference phase, making it far less computationally demanding compared to training. For a more refined view of the loss or accuracy landscape after training, a subset of test data can also be used for efficient evaluation. In our experiment with the same architecture and dataset mentioned above, on the subset test dataset, the accuracy surface showed a valley consistently located at the (u, v) pair (0.4, 0) across both the full dataset and the subset evaluation. The peak shifted slightly from the (u, v) pair (0.9, 0.2) to (0.9, 0.1), but both points remain within the same high-accuracy region, demonstrating the robustness of the surface structure.\\n\\n\\n\\nWe hope this response addresses your concerns and clarifies the unique contributions of our work. We welcome any further discussions to explore these distinctions in greater depth.\"}",
"{\"comment\": \"Thank you for recognizing the contributions of our work, particularly:\\n\\n1. Proposing a mathematically simple yet effective framework for extending mode connectivity to B\\u00e9zier surfaces.\\n2. Demonstrating through experiments that our method successfully connects multiple minima.\\n\\n**Our primary contribution lies in establishing a mathematical framework of B\\u00e9zier surface-based mode connectivity complemented by an efficient algorithm, empirically demonstrating its effectiveness, and proving its utility in exploring the loss landscape. We note that model merging and ensembling are presented as two potential applications of this connectivity, showcasing the versatility of our method**. Below, we address your concerns in detail:\\n\\n\\n\\n> ### **Several related works [1, 2, 3, 4, 5] on constructing surfaces to connect multiple minima are missing. The authors should include these in the related works and explain how their approach differs and its advantages.**\\n\\nWe appreciate your detailed feedback and suggestions. To clarify, our work explores mode connectivity in parameter space via B\\u00e9zier surfaces, distinguishing it from prior works in several ways: Papers [1] and [2] explore different settings from our work. Paper [1] examines low-loss volumes using multiple simplexes, while Paper [2] models the loss landscape as a collection of high-dimensional wedges. Both approaches differ fundamentally from our focus on surface-based mode connectivity. Papers [3] and [4] focus on identifying specific patterns within loss surfaces of neural networks. While paper [5] examines geometry across multiple loss subspaces, its focus is on pairwise mode connectivity and does not extend to constructing surfaces with provable low-loss properties. Our method uniquely emphasizes the exploration of mode connectivity through surfaces, providing insights beyond the scope of these prior works.\\n\\nWe included a comprehensive discussion of these comparisons in the revised version to clearly articulate how our approach advances the understanding of loss landscapes.\\n\\n\\n\\n> ### **The authors do not compare their method against other model merging approaches. Can the technique leverage diversity from different inits and shuffling, beyond what standard merging (restricted to the same basin) achieves? For ensembling, the baseline of solely ensembling four endpoints is missing. Does the surface provide additional diversity?**\\n\\nand \\n\\n> ### **Another related work [6] constructs ensembles within the same convex region, allowing weight averaging while retaining diversity. How does this method compare to the proposed approach?**\\n\\n\\n\\n- **Breaking Basin Constraints**: Traditional model merging methods rely on models residing in the same basin, limiting their applicability. Our method enables merging models even when they reside in different basins. Unlike [6], which operates under the constraint that models lie within a single basin, our method allows for connecting and merging models across basins. This enables a broader utilization of diverse models. \\n- **Impact of Basins Separation**: When corner models reside in different basins, traditional model merging methods, such as linear interpolation in parameter space, result in suboptimal accuracy. This is demonstrated in the left panel of Figure 5 in our main paper, which illustrates standard merging (due to the specific initialization of control points) with models from different basins. 
In this case, intermediate points show a significant drop in performance (~18.4%). In contrast, B\\u00e9zier surface connectivity consistently identifies stable low-loss regions, enabling effective model merging.\\n- **Experimental Results**: Following your suggestions, we compared ensembling across the surface versus ensembling only the four corner models. The results demonstrate the advantage of surface ensembling in capturing additional diversity, leading to superior accuracy:\\n\\n| **Model** | **Four-Corner Ensemble** | **Surface Ensemble** |\\n| ----------------- | ------------------------ | -------------------- |\\n| VGG16/CIFAR-10 | 90.4% | 92.0% |\\n| ResNet18/CIFAR-10 | 90.1% | 92.7% |\\n\\nWe added these additional results and discussions into the revised version paper to address this point more explicitly.\\n\\n```\\nThis response continues in the second half below.\\n```\", \"title\": \"Response (Part 1)\"}",
"{\"title\": \"Appreciation for Your Feedback\", \"comment\": \"Thank you again for your thoughtful feedback and for taking the time to review our response! We're delighted to hear that our explanations were helpful. We would be happy to engage in further discussions or address any additional questions you may have.\"}",
"{\"comment\": \"```\\nContinuing from the first half of this response\\n```\\n\\n> ### **Taking the best-performing point on the surface seems unfair and computationally expensive due to the need for dense grid evaluation. Would comparing against a \\\"mean\\\" value or natural centroid be more appropriate, and how would it perform relative to individual models?**\\n\\n\\nThank you for pointing out the need for clarification.\\n\\n- **The logic of choosing the best-performing point**: Although the B\\u00e9zier surface undergoes optimization, the selected point fundamentally serves as a merging point, seamlessly integrating the knowledge of the corner models. Unlike linear paths that directly interpolate in parameter space, the B\\u00e9zier surface facilitates a non-linear merging process across the surface. Each point on this surface can be interpreted as a merging point derived from the four corner models. Through the optimization of the surface, these points naturally emerge, reflecting an effective combination of the corner models' knowledge. Ultimately, the best-performing point on the B\\u00e9zier surface represents the optimal merging point, capturing the most effective integration of the models.\\n- **Loss-Accuracy Correlation for efficient search**: We argue that performing a dense grid search on the test set is unnecessary in practice. Our approach capitalizes on the strong correlation between the \\\"loss surface\\\" measured on the training set and the \\\"accuracy surface\\\" measured on the test set. This alignment allows us to identify high-performing models directly from the training loss landscape by selecting points with low loss. As shown in Figures 4(b) and 5(b), the valleys in the training loss surface correspond closely to the peaks in test accuracy, enabling the efficient selection of optimal models without extensive test set evaluations.\\n- **No Center in Besizer Surface**: B\\u00e9zier surfaces, as defined, do not have a natural centroid. However, by setting u=0.5 and v=0.5, we can obtain a point on the surface using the mean value of the parameters. The point we obtained still shows low loss and high accuracy on the surface, but it is not nessesaerily the best performance model we obtain on the surface. \\n\\nWe added the discussions in the revised version accordingly.\\n\\n\\n\\n**References**\\n\\n[1] Benton et al., *Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling*, 2021.\\n\\n[2] Fort & Jastrzebski, *Large Scale Structure of Neural Network Loss Landscapes*, 2019.\\n\\n[3] Skorokhodov et al., *Loss Landscape Sightseeing with Multi-Point Optimization*, 2019.\\n\\n[4] Czarnecki et al., *A Deep Neural Network\\u2019s Loss Surface Contains Every Low-Dimensional Pattern*, 2019.\\n\\n[5] Chen et al., *Examining the Geometry of Neural Mode Connecting Loss Subspaces*, 2023.\\n\\n[6] Lion et al., *How Good is a Single Basin?*, 2023.\", \"title\": \"Response (Part 2)\"}"
]
} |
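The record above revolves around connecting trained networks with a Bézier surface whose four corner control points are the trained models and whose inner control points are optimized for low loss. Below is a minimal numpy sketch of the tensor-product Bézier evaluation for such a control grid (e.g., 3×3, as in the paper's experiments). Treating each model as a flattened parameter vector and the helper name `bezier_surface_point` are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from math import comb

def bezier_surface_point(control: np.ndarray, u: float, v: float) -> np.ndarray:
    """Evaluate a tensor-product Bezier surface of parameter vectors at (u, v).

    control: (n+1, m+1, d) grid of control points, each a flattened model's
             d parameters (corners = trained models, inner points = learned).
    Returns the interpolated d-dimensional parameter vector.
    """
    n, m = control.shape[0] - 1, control.shape[1] - 1
    bu = np.array([comb(n, i) * u**i * (1 - u) ** (n - i) for i in range(n + 1)])
    bv = np.array([comb(m, j) * v**j * (1 - v) ** (m - j) for j in range(m + 1)])
    # Bernstein-weighted sum over the control grid.
    return np.einsum("i,j,ijd->d", bu, bv, control)

# Example: a 3x3 grid of 10-dimensional "models", sampled at the surface centre.
theta = np.random.randn(3, 3, 10)
centre = bezier_surface_point(theta, 0.5, 0.5)
```

During surface training, (u, v) pairs would be drawn uniformly from the unit square and the loss of the corresponding interpolated model minimized with respect to the non-corner control points, mirroring the surrogate objective described in the discussion above.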
1NYhrZynvC | Exact linear-rate gradient descent: optimal adaptive stepsize theory and practical use | [
"Yifan Ran"
] | Consider gradient descent iterations $ {x}^{k+1} = {x}^k - \alpha_k \nabla f ({x}^k) $.
Suppose the gradient exists and $ \nabla f ({x}^k) \neq {0}$.
We propose the following closed-form stepsize choice:
\begin{equation}
\alpha_k^\star = \frac{ \Vert {x}^\star - {x}^k \Vert }{\left\Vert \nabla f({x}^k) \right\Vert} \cos\eta_k , \tag{theoretical}
\end{equation}
where $ \eta_k $ is the angle between vectors $ {x}^\star - {x}^k $ and $ -\nabla f({x}^k) $.
It is universally applicable and admits an exact linear convergence rate with factor $ \sin^2\eta_k $.
Moreover, if $ f $ is convex and $ L $-smooth, then $ \alpha_k^\star \geq {1}/{L} $.
For practical use, we approximate (can be exact) the above via
\begin{equation}
\alpha_{k}^\dagger = \gamma_0 \cdot \frac{ f({x}^k) - \bar{f}_0 }{\Vert \nabla f ( {x}^k ) \Vert^2 } ,
\tag{practical use}
\end{equation}
where $\gamma_0 $ is a tunable parameter; $ \bar{f}_0 $ is a guess of the smallest objective value (it can be automatically updated).
Suppose $ f $ is convex and $ \bar{f}_0 = f ( {x}^\star ) $, then
any choice from $\gamma_0 \in (0,2] $ guarantees an exact linear-rate convergence to the optimal point.
We consider a few examples.
(i) An $ \mathbb{R}^2 $ quadratic program, where a well-known ill-conditioning bottleneck is addressed, with a rate strictly better than $ O(1/2^k) $. (ii) A geometric program, where an inaccurate guess $ \bar{f}_0 $ remains powerful.
(iii) A non-convex MNIST classification problem via neural networks, where preliminary tests show that ours admits better performance than the state-of-the-art algorithms; in particular, a tune-free version is available in some settings. | [
"gradient descent",
"adaptive stepsize/learning rate",
"universal optimal choice",
"exact convergence rate"
] | Reject | https://openreview.net/pdf?id=1NYhrZynvC | https://openreview.net/forum?id=1NYhrZynvC | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vhFVeXRD4X",
"s3JDI0DKyQ",
"q9AAs87kr7",
"gjbLSVidj2",
"Nnx9h3ICkf",
"I4yccexC1D",
"CPNocADXMB",
"AFfYSjRDL5",
"8iYkDlogl8",
"0mYBxJ82Ry",
"0alr14Wz3J"
],
"note_type": [
"official_review",
"official_comment",
"decision",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review"
],
"note_created": [
1730691089688,
1732314608294,
1737523545573,
1730772131752,
1730614230306,
1731751215061,
1732621742613,
1732620742380,
1729786801204,
1733110170548,
1734050257710
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_Les3"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_LAqD"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_LAqD"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_szwm"
],
[
"ICLR.cc/2025/Conference/Submission2962/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_1n9o"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_szwm"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_1n9o"
],
[
"ICLR.cc/2025/Conference/Submission2962/Reviewer_Les3"
],
[
"ICLR.cc/2025/Conference/Submission2962/Area_Chair_oVXi"
]
],
"structured_content_str": [
"{\"summary\": \"The paper considers selecting a stepsize for gradient descent, in particular when we cannot compute global quantities like smoothness parameters. Though there has been considerable work, including recently, on adaptive step size selection methods such as Adagrad, this paper takes a different view. The idea is to approximate the a step size that looks a lot like the Polyak step size, by quantities that can be estimated (the Polyak step size requires knowing f(x*)).\\n\\nThey use this step size on various experiments, including on the non-convex problem of training a 2 layer MLP.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper considers a significant and important problem.\\nThe problem is of current interest -- there are papers appearing about related topics every year.\\nThe proposal of a new step size is related to well studied step sizes like Polyak step size, but it seems to have some novel aspects.\", \"weaknesses\": \"The writing could be significantly improved. There are many examples where the writing deviates from grammatical English, even from the very beginning of the paper. For instance, \\u201cdue to quantity is not a priori knowledge.\\u201d \\u2014 lines 75-76. In some places this does not impede the understandability of the paper, but in others the problems with the writing indeed make it hard to properly understand what the paper is about, and what its contributions are.\\n\\nThe introduction is generally loose and imprecise, in areas where it should be specifying exactly what the area of contribution is, precisely because this is such a well-researched area. For example, the paper says that though there are several adaptive algorithms implemented and available, \\u201can adaptive stepsize theory has not been established.\\u201d This is confusing, since there are many theoretical papers about AdaGrad and other adaptive step size schedules in the last few years in ML and Optimization venues (not to mention that it is also a fairly classical topic). \\n\\nThen we are told that their optimal stepwise yields a linear rate with factor sin^2 \\\\eta_k \\u2014 but we do not know what \\\\eta_k is at this point in the paper. They they gone on to say that the theory applies to non-convex functions, but we are not told what is guaranteed in this case. At least an informal statement should be made explaining what is happening, if the authors wish to talk about it directly. \\n\\nProposition 2.1 says it guarantees convergence to a global optimum of GD, yet does not require in the statement that the function being optimized be convex. The proof also does not mention convexity, and indeed does not prove anything about global convergence. \\n\\nIn line 146, the paper says that they assume that the gradient is non-zero unless GD has already converged; but then they say that this means that it has converged to x*, but which I understand that the assumption is that they assume they are minimizing a function that has no stationary points other than the unique global optimum. \\n\\nThe experiments are also not particularly convincing. 
They need to better point to where the weaknesses are of other related methods, where this approach succeeds.\", \"questions\": \"What are the weakest assumptions that are required about the function f, in order for you to guarantee your results hold?\\n\\nWhat is the relationship to the Polyak step size (e.g., paper by Hazan and Kakade)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks the authors for the responses. However, there are no explanations to my questions, and I still think this paper can be significantly improved in terms of showing explicit convergence rate improvement.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper studies some adaptive size rules for smooth functions, including some theoretical optimal ones and practical approximations. Experiments show some advantages of these rules.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper uses several examples to demonstrate the benefits of using the proposed step size rules.\", \"weaknesses\": \"1. This paper lacks a comprehensive comparisons with prior art. A similar rule to the proposed step size rule is studied at least in reference [1] and revisited in [2]. For example, in [2], algorithm 2 is similar to the algorithm for practical use in this paper, and to compare rates with [2], can the authors provide details on how to interpret the term $\\\\Pi_{t = 0}^k \\\\delta_t$ in equation 3.2?\\n2. The optimal choice is not shown to be optimal in detail and not fully understandable to me, i.e., in what sense this choice if optimal, does it achieve fastest global convergence rate or fastest one-step descent?\\n3. The experiments in Figure 3 rely on a good guess of $\\\\bar{f}_0$, and this introduces another parameter for a step size rule designed for tuning free case.\\n\\n[1]. Boris T. Polyak. Introduction to optimization. Optimization Software, Inc., New York, 1987.\\n[2]. Hazan, Elad, and Sham Kakade. \\\"Revisiting the Polyak step size.\\\" arXiv preprint arXiv:1905.00313 (2019).\\n\\nBased on these weakness, I think this paper can be significantly enhanced by a thorough comparison with related works and detailed explanations of the improved convergence rates.\", \"questions\": \"1. Does Theorem 2.1 and Corollary 2.2 assumes $L$-smoothness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes a new adaptive stepsize for gradient descent that achieves exact linear convergence rates for convex optimization.\\n\\nThe key contribution is a novel stepsize formula based on the gradient and objective function.\", \"the_authors_provide_two_versions_of_the_stepsize\": \"a theoretical version and a practical version.\\n\\nThey demonstrate the efficacy of this approach through some preliminary examples.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**S1:** The paper is well-written and easy to follow, with a clear presentation of the introduction and background on line-search-free first-order methods.\\n\\n**S2:** This paper proves a simple line-search-free variant of gradient descent to minimize smooth convex functions. The proposed stepsize can be dynamically adjusted to capture the curvature information of the problem, allowing for faster convergence.\\n\\n**S3:** The paper provides a rigorous proof of the linear convergence rate under the convex settings.\\n\\n**S4:** The paper includes empirical comparisons with other popular optimizers, such as Adam and N-AGD.\", \"weaknesses\": \"**W1.** The theoretical analysis relies on strong assumptions, namely that the objective function is convex and the optimal objective value $f(x^*)$ is known.\\n\\n**W2.** The only practical solution proposed in this paper is Algorithm 1. However, the authors do not provide a theoretical analysis for it. In particular, does Algorithm 1 converge in convex settings? What is its iteration complexity in an ergodic sense when the objective function is convex and non-convex?\\n\\n**W3.** Additional detailed discussion and analysis are necessary and would be beneficial to further clarify and present Algorithm 1. \\n1. For example, the auto-correction mechanism in Algorithm 1 explicitly requires $g(x) \\\\geq 0$; otherwise, $\\\\overline{f}_0$ may not serve as a reliable estimate of $f(x^*)$. \\n2. Taking the least squares problem in Problem (3.16) as an example, when $\\\\alpha >0$ and $\\\\alpha \\\\approx0$, Algorithm 1 could get stuck at a point that is neither a local nor a global minimum, as the second correction in Line 322 is never invoked. This can result in a less accurate estimation of $f(x^*)$.\\n\\n**W4.** Other issues: \\n\\n1) The proposed algorithm is only suitable for deterministic optimization problems, as it requires calculating the objective function value, making it incompatible with stochastic optimization models. Comparing it with stochastic optimizers like ADAM may be unfair, as ADAM is designed for stochastic settings while the proposed method is deterministic.\\n\\n2) It would be beneficial for the authors to include comparisons with other leading deterministic algorithms, such as AdaGrad-Norm (AdaGrad stepsizes: Sharp convergence over nonconvex landscapes, JMLR 2020), APGM (Adaptive Proximal Gradient Methods Are Universal Without Approximation, ICML 2024), and AdaBB (Adaptive Barzilai-Borwein method for convex optimization, 2024).\", \"questions\": \"**Q1.** Could the authors provide theoretical analysis (e.g., oracle or iteration complexity) for the proposed adaptive stepsize strategy in the case where $f(x)$ is non-convex?\\n\\n**Q2.** The authors mention using a commonly adopted mini-batch size of 128. Is this setting specific to ADAM? 
The proposed method may not directly extend to stochastic settings if it requires a dynamic estimation of $f(x^*)$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Main response: prior art clarification\", \"comment\": \"We thank the reviewers for spending valuable time reviewing our paper.\\n# Clarification on prior art\\nWe are completely not aware of Polyak's stepsize. **We rediscovered it**. This is why reviewers find that --- *`looks a lot like Polyak's stepsize, but seems to have some novel aspects'.* \\n\\n**We guarantee all results in our paper are independently developed from scratch.** After reviewers pointed out Polyak's stepsize, we found 7 related papers [1-7], and ours turns out only has a single overlap with them in a special case. Other results and our motivation all appear to be significantly different (or completely new), detailed below:\\n\\n- Overlap: \\n - Polyak's stepsize coincides with our tune-free case in eq.(1.6).\\n- Differences: \\n - (i) Our theoretical choice, $\\\\alpha_k^\\\\star =\\\\frac{ \\\\Vert \\\\bf{x}^\\\\star - \\\\bf{x}^{k} \\\\Vert }{ \\\\Vert \\\\nabla f (\\t{\\\\bf{x}}^{k} ) \\\\Vert} \\\\cos\\\\eta_k $ in **eq. (2.5)**, is not found anywhere else.\\n - (ii) All our convergence rates are exact, including $\\\\Pi_{t = 0}^k sin^2 \\\\eta_k $ in **eq. (2.6)** and $\\\\Pi_{t = 0}^k \\\\delta_t$ in **eq. (3.2)**, which we cannot find anywhere else. \\n - (iii) An adaptive selection range $ (0, 2 \\\\alpha_k^\\\\star) $ in **eq. (2.8)** for convex problems, which is a natural extension of the classical $ (0, 2/L) $, is not found anywhere else.\\n - (iv) Our result applies to a non-convex function, with an extended selection range $(2 \\\\alpha_k^\\\\star, 0)$ or $(0, 2 \\\\alpha_k^\\\\star) $ in **eq. (2.3)**, is not found anywhere else.\\n - (v) Polyak's stepsize requires $f(x^\\\\star)$, or at least a good lower bound. We proposed an auto-correction procedure in **Algorithm 1** to alleviate this bottleneck, applicable to both convex and non-convex problems, as in Sec. 4.2 and Sec. 4.3.2, with codes open-sourced in the supplementary material. These are not found anywhere else.\\n - (vi) We proved that our optimal choice $ \\\\alpha_k^\\\\star$ does not suffer from ill-conditioning in **Sec. 4.1** (for a well-known 2-d example), and proved a\\nstrictly better performance than the exact line search in **Sec. 4.1.1**. These are extremely surprising results, and we have not found anything similar.\\n\\n\\n- Our motivation: \\n - We are motivated by an obstacle --- our theoretical choice $\\\\alpha_k^\\\\star$ is not known in advance. We need a pre-known stepsize choice to satisfy $\\\\alpha_k \\\\in (0, 2 \\\\alpha_k^\\\\star) $. This motivates a lower bound choice in eq. (1.3).\\nAll papers in [1-7] do not have $ \\\\alpha_k^\\\\star$ or the feasible range $(0, 2 \\\\alpha_k^\\\\star) $. Things are different in the first place.\\n\\n\\n### Reference 1\\n\\n[1] Boris Polyak. Gradient methods for the minimisation of functionals. Computational Mathematics\\nand Mathematical Physics, 1963.\\n\\n[2] Boris Polyak. Introduction to optimization. 1987\\n\\n[3] Nikhil Devanathan and Stephen Boyd. Polyak minorant method for convex optimization. Journal of\\nOptimization Theory and Applications, 2024.\\n\\n[4] Xiaoyu Wang, Mikael Johansson, and Tong Zhang. Generalized polyak step size for first order\\noptimization with momentum, 2023.\\n\\n[5] Xiaowen Jiang and Sebastian U. Stich. Adaptive SGD with Polyak stepsize and line-search: robust\\nconvergence and variance reduction. NeurIPS, 2023.\\n\\n[6] Elad Hazan and Sham Kakade. 
Revisiting the polyak step size, 2022.\\n\\n[7] Nicolas Loizou, Sharan Vaswani, Issam Hadj Laradji, and Simon Lacoste-Julien. Stochastic polyak step-size\", \"for_sgd\": \"an adaptive learning rate for fast convergence, AISTATS 2021.\\n\\n## Our efforts for the literature review\\nFor related work, apart from the stepsize papers already cited in our Sec. 1.1,\", \"we_have_checked_the_following_fabulous_optimization_textbooks\": \"Boyd Stephen and Lieven [1], Nesterov [2], Ryu and Yin [3]. In addition, we have checked dozens of university lecture notes on gradient descent.\\nUnfortunately, **none of them mentioned 'Polyak's stepsize'** (many other works from Prof. Polyak are mentioned, but not the stepsize). \\n\\n**Admittedly, it is quite hard to find this prior art, unless knowing the exact name 'Polyak's stepsize'.** \\n\\nProf. Polyak is one of the greatest founders of the optimization field, it would be our honor to have a chance to cite his work.\\nBut again, we are completely not aware of Polyak's stepsize, which was published in 1963.\\nWe sense that the power of 'Polyak's stepsize' might be underestimated. If lucky, our paper might bring some extra revived interest.\\n\\n\\n### Reference 2\\n[1] Boyd Stephen and Vandenberghe Lieven. Convex Optimization. Cambridge University Press, 2004.\\n\\n[2] Yurii Nesterov. Lectures on Convex Optimization. Springer, 2018.\\n\\n[3] Ernest K Ryu and Wotao Yin. Large-scale convex optimization: algorithms & analyses via monotone operators. Cambridge University Press, 2022.\"}",
"{\"comment\": \"The authors have failed to respond regarding the mathematical errors of their work that I pointed out (e.g., uniqueness of the solution). For this reason I maintain my current score.\"}",
"{\"comment\": \"I have implemented the practical version of the proposed algorithm and found that it converges very fast.\\n\\nI encourage the authors to carry out a in-depth analysis on this algorithm.\\n\\nHowever, I maintain my original score considering a number of concerns put forward by the reviewers.\"}",
"{\"summary\": \"The paper proposes an adaptive stepsize selection scheme for gradient descent (GD). The main theoretical contribution is providing an expression for what is claimed to be an optimal stepsize choice, which depends on the (implicitly assumed to be unique) solution to the problem. For practical implementation, they propose approximating this with a Polyak-like stepsize estimating inf_x f(x). The authors provide convergence analysis and some numerical experiments on MNIST and quadratic optimization.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Numerical experiments are performed and their plots are reported\"], \"weaknesses\": [\"The definition of the theoretical stepsize proposed depends on x* but it's not clear that x* is unique or that this stepsize is well-defined if x* is not unique. No further assumptions on the objective function f are ever stated to ensure uniqueness of x*. No discussion of what will happen if x* is not unique is given.\", \"The practical use stepsize given is just a Polyak stepsize approximating inf f by \\\\bar{f}_0. Yet, no reference to Polyak is made nor to any papers studying the Polyak stepsize and related variants, which are quite numerous. In this way the discussion of related work is severely lacking.\", \"The quality of writing is far below a level that is acceptable for publication. Many statements are mathematically incomplete (e.g., line 155 and many others) or outright incorrect (e.g., the Baillon-Hadad theorem on line 650). Many statements have implicit assumptions that are never stated and not always satisfied or verifiable (e.g., line 146 and many others). None of the convergence results make sense mathematically as there is no reason for x* to be unique - how can \\\\|x_k-x*\\\\|^2 go to 0 for two different x*?\", \"There is no comparison of the tuning-free algorithm to other tuning-free gradient descent algorithms, of which there is a significant body of work.\"], \"questions\": [\"Why are there no citations to relevant works on Polyak stepsize and tuning-free methods?\", \"What are the assumptions made on f for each of the results, and do they depend on x* being unique?\", \"Can you actually verify the assumptions you make on alpha_k in any way if you know in advance f or at least properties that it satisfies, e.g., Lipschitz-smoothness or gradient domination?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"I appreciate the authors' rebuttal, including the fact that they were previously unaware of the literature on Polyak Stepsize.\\n\\nOverall, there are still numerous parts of this paper that are not explained, and questions posed by myself and the other reviewers that have not been answered in the response.\"}",
"{\"metareview\": \"Despite the achievement of rediscovering the Polyak stepsize without knowing it before, it is quite clear that this paper does not offer any new insights of this celebrated method. All reviewers agree - and I join them - that this paper should be rejected.\", \"additional_comments_on_reviewer_discussion\": \"see metareview\"}"
]
} |
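The practical stepsize in the abstract of the paper above is simple to implement. Below is a minimal NumPy sketch of gradient descent with the choice α_k = γ0 · (f(x^k) − f̄0) / ‖∇f(x^k)‖²; the ill-conditioned quadratic test problem and the particular γ0, f̄0 values are illustrative assumptions, not the paper's own experiments.

```python
import numpy as np

def adaptive_gd(f, grad_f, x0, f_bar=0.0, gamma0=1.0, max_iter=2000, tol=1e-10):
    """Gradient descent with alpha_k = gamma0 * (f(x) - f_bar) / ||grad f(x)||^2.

    f_bar is a guess of the smallest objective value (assumed known here);
    gamma0 in (0, 2] is the tunable factor from the abstract above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        gnorm2 = float(g @ g)
        if gnorm2 < tol:                      # (near-)stationary point reached
            break
        alpha = gamma0 * (f(x) - f_bar) / gnorm2
        x = x - alpha * g
    return x

# Illustrative ill-conditioned quadratic: f(x) = 0.5 * x^T diag(1, 100) x, minimum value 0.
D = np.array([1.0, 100.0])
f = lambda x: 0.5 * float(x @ (D * x))
grad_f = lambda x: D * x

x_star = adaptive_gd(f, grad_f, x0=[3.0, -2.0], f_bar=0.0, gamma0=1.0)
print(x_star)  # iterates move monotonically closer to the origin (the unique minimizer)
```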
1MjOlHwCE6 | Reducing Complexity of Force-Directed Graph Embedding | [
"Hamidreza Lotfalizadeh",
"Omar Yaqub",
"Mohammad Al Hasan"
Graph embedding is a critical pre-processing step that maps elements of a graph network, such as its nodes or edges, to coordinates in a $d$-dimensional space. The primary goal of the embedding process is to capture and preserve various features of the graph network, including its topology and node attributes, in the generated embedding. Maintaining these graph features in the embedding can significantly enhance the performance of downstream machine learning tasks. In this work, we introduce a novel family of graph embedding methods that leverage kinematics principles within a spring model and $n$-body simulation framework to generate the graph embedding. The proposed method differs substantially from state-of-the-art (SOTA) methods, as it does not attempt to fit a model (such as neural networks) and eliminates the need for functions such as message passing or back-propagation. Instead, it aims to position the nodes in the embedding space such that the total net force of the system is reduced to a minimal threshold, resulting in the system reaching an equilibrium state. The spring model is designed as a linear summation of non-linear force functions, with the shortest-path distance serving as the adjusting parameter for the force factor between each node pair, thereby inducing the graph topology in the force functions. In this work, we attempt to reduce the complexity of the original algorithm from $O(n^2)$ to $O(n\log(n))$, while maintaining the performance metrics at a competitive level.
The proposed method is intuitive, parallelizable, and highly scalable. While the primary focus of this work is on the feasibility of the Force-Directed approach, the results in unsupervised graph embeddings are comparable to or better than SOTA methods, demonstrating its potential for practical applications. | [
"Graph embedding",
"Force-directed",
"representation learning",
"Spring model",
"Reduced complexity"
] | Reject | https://openreview.net/pdf?id=1MjOlHwCE6 | https://openreview.net/forum?id=1MjOlHwCE6 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"qxgynE8gQ7",
"o4JPCFKTFP",
"c4ClAmiJWp",
"WLtJFCmFpI",
"Df7opMXymx",
"B9ruYJuVjk"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review",
"meta_review"
],
"note_created": [
1737523911781,
1730430776308,
1729809172053,
1729168010192,
1730311518215,
1733844911763
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8480/Reviewer_tLNK"
],
[
"ICLR.cc/2025/Conference/Submission8480/Reviewer_nWvq"
],
[
"ICLR.cc/2025/Conference/Submission8480/Reviewer_exGs"
],
[
"ICLR.cc/2025/Conference/Submission8480/Reviewer_3mBP"
],
[
"ICLR.cc/2025/Conference/Submission8480/Area_Chair_R95j"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The short review is as follows: the paper proposes a new set of graph embedding methods which instead of using message passing or back propagation, it uses spring model to construct graph embedding. Each nodes' positional embedding is the equilibrium state. I think there are quite a lot of paper that proposes new graph embedding methods, and in order to make a proposed method to work it needs to capture (1) global and local structure information (2) able to be learned and proactively adapted, otherwise no one would ever use the newly proposed embedding methods. From a brief walkthrough of the paper, I don't think the proposed method can be used as a way that proactively learns embeddings for nodes and graphs, which are useful for downstream tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"NA\", \"weaknesses\": \"NA\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"While this paper presents an interesting force-directed graph embedding approach, the manuscript feels incomplete. I recommend that the authors include more powerful baselines (e.g., DGI [1], GraphZoom [2]) and conduct evaluations on larger graphs (with over 1M nodes) to better demonstrate improvements in accuracy and scalability for their next submission.\\n\\n[1] Veli\\u010dkovi\\u0107 et al., \\\"Deep graph infomax\\\", ICLR'19 \\\\\\n[2] Deng et al., \\\"GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding\\\", ICLR'20\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"NA\", \"weaknesses\": \"NA\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents a force-directed graph embedding method that reduces the computational complexity of an earlier approach proposed by Lotfalizadeh et al. The authors introduce a modification to limit the force computations to $k$-hop neighborhoods and a few randomly sampled nodes, resulting in a reduction from $O(n^2)$ to $O(n \\\\log(n))$ complexity. This makes the proposed method potentially more scalable for large graphs while maintaining competitive performance in unsupervised graph embedding tasks like link prediction and node classification.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(++) **Scalability Improvement**: The proposed complexity reduction from $O(n^2)$ to $O(n \\\\log(n))$ is a notable improvement.\\n\\n(++) **Practical Utility**: The paper demonstrates comparable or slightly better performance to some state-of-the-art graph embedding methods while offering scalability improvements, which suggests that the proposed approach has practical utility for large graph datasets.\", \"weaknesses\": [\"(----) **Limited Novelty**: The main contribution is an incremental improvement to the original method by Lotfalizadeh et al. The use of $k$-hop neighborhoods and stochastic sampling for complexity reduction, while useful, does not represent a fundamentally new idea in the context of graph representation learning. The paper offers no new theoretical contributions, insights, or analyses.\", \"(----) **Relationship to Previous Work**: The relationship to previous work, by Lotfalizadeh et al. (2023, 2024) is ambiguous. It is not clear how this work fundamentally extends the original force-directed embedding approach from these works.\", \"(--) **Limited Evaluation and Analysis**: The paper only evaluates the quality of the proposed embeddings using two downstream tasks: link prediction and node classification.\", \"(---) **Presentation Issues**: There are multiple signs that the paper is incomplete. Some examples:\", \"Placeholders such as \\\"!!! A PICTURE TO BE INSERTED for CAMERA READY !!!\\\" at Line 239 and \\\"!!! TO BE ELABORATED ON for CAMERA READY !!!\\\" at Line 452. The Discussion section is empty!\", \"Typographical errors such as \\\"topolofy\\\" on Line 234 and starting the sentences on Line 186 with lowercase letters.\", \"Broken reference on Line 301.\", \"$\\\\log(n^2)$ in Line 027 in the abstract should be $O(n^2)$.\", \"The notation $\\\\mathbf{z}_{uv} = \\\\mathbf{z}_v - \\\\mathbf{z}_u$ was introduced on Line 140 to facilitate brevity, then used in equation 3, not used in equations 8 and 9, then used again in equations 13 and 14.\", \"The paper mentions several well-known graph embedding techniques on Line 273, such as LINE, SDNE, DeepWalk, and Node2vec, but does not provide proper inline citations for them.\", \"(--) **Marginal Performance Improvement**: While not a deal breaker, the downstream task performance improvement on previous methods is marginal at best, as can be seen in Figures 3 and 5.\"], \"questions\": \"1. Could you clarify the differences between this paper and the previous work by Lotfalizadeh et al.?\\n2. Could you expand the empirical analysis and evaluation with more downstream tasks, e.g., multilabel classification or clustering?\\n3. Besides improved performance on downstream tasks, what desirable qualities do the FD embeddings have? E.g., the paper mentions reflecting the topology of the graph on Line 234 as a rationale for some of your choices. 
Would it be possible to evaluate that with metrics such as mean average precision?\\n4. Could you make your code available for reproducibility purposes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents a novel method for computing graph embeddings using a spring model without any neural network/model.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The overall idea is interesting and offers a complimentary perspective.\", \"weaknesses\": \"The paper seems incomplete.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper was submitted in a very incomplete state. I would like to discourage the authors from doing this in the future -- the review stage is not a place to get preliminary feedback on partial results, and by submitting incomplete papers like this, you do a disservice to the entire community, which is already plagued by high reviewer loads and difficulties in finding expert reviewers.\", \"additional_comments_on_reviewer_discussion\": \"NA. Paper was incomplete and I just asked reviewers to give short reviews. No discussion with authors.\"}"
]
} |
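As a rough illustration of the spring model described in the abstract of the paper above (not the authors' exact force functions, and without the k-hop/sampling trick that gives the claimed complexity reduction), the sketch below moves node embeddings along net spring forces whose rest lengths are shortest-path distances, stopping when the total force is small. The graph, dimensionality, learning rate and threshold are assumptions for illustration.

```python
import numpy as np
import networkx as nx

def spring_embedding(G, d=2, lr=0.01, max_iter=500, tol=1e-3, seed=0):
    """Toy force-directed embedding: every node pair (u, v) is connected by a
    spring whose rest length is their shortest-path distance; nodes move along
    the net force until it falls below a threshold (near-equilibrium)."""
    nodes = list(G.nodes())
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    # All-pairs shortest-path distances (assumes a connected graph);
    # note this dense version is O(n^2) and is for clarity only.
    sp = dict(nx.all_pairs_shortest_path_length(G))
    D = np.array([[sp[u][v] for v in nodes] for u in nodes], dtype=float)
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(n, d))
    for _ in range(max_iter):
        diff = Z[:, None, :] - Z[None, :, :]         # pairwise displacement vectors
        dist = np.linalg.norm(diff, axis=-1) + 1e-9  # pairwise embedding distances
        coeff = (dist - D) / dist                    # spring stretch per pair
        np.fill_diagonal(coeff, 0.0)
        F = -(coeff[:, :, None] * diff).sum(axis=1)  # net force on each node
        if np.linalg.norm(F) < tol:
            break
        Z += lr * F
    return {u: Z[idx[u]] for u in nodes}

emb = spring_embedding(nx.karate_club_graph())
```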
1MHgMGoqsH | Unifying Back-Propagation and Forward-Forward Algorithms through Model Predictive Control | [
"REN Lianhai",
"Qianxiao Li"
] | We introduce a Model Predictive Control (MPC) framework for training deep neural networks,
systematically unifying the Back-Propagation (BP)
and Forward-Forward (FF) algorithms.
At the same time, it gives rise to a range of
intermediate training algorithms with varying look-forward horizons,
leading to a performance-efficiency trade-off.
We perform a precise analysis of this trade-off on
a deep linear network, where the qualitative conclusions
carry over to general networks.
Based on our analysis, we propose a principled method to choose
the optimization horizon based on given objectives and model specifications.
Numerical results on various models and tasks
demonstrate the versatility of our method. | [
"deep learning optimization",
"model predictive control"
] | https://openreview.net/pdf?id=1MHgMGoqsH | https://openreview.net/forum?id=1MHgMGoqsH | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"o9ZilKrGlA",
"UPs2PD9kPH",
"6QInzPCeCz",
"4bQa56uy2L"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730752760971,
1730583761285,
1731907536776,
1730231918637
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1739/Reviewer_HB74"
],
[
"ICLR.cc/2025/Conference/Submission1739/Reviewer_s82N"
],
[
"ICLR.cc/2025/Conference/Submission1739/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1739/Reviewer_Wf2p"
]
],
"structured_content_str": [
"{\"summary\": \"Drawing inspiration from the model predictive control framework, this work proposes a framework for integrating back-propagation (BP) and the forward-forward (FF) algorithm (Hinton, 2022) for optimizing neural networks. In this framework, layer-wise local losses are back-propagated by $h$ steps, where the horizon $h$ is a user-provided algorithm parameter that controls a memory-performance trade-off. Here, $h=1$ corresponds to the FF algorithm, while $h=T$ for a $T$-layer neural net corresponds to backprop. A theoretical result is provided showing the convergence of the loss gradient to (a scaling of) the true gradient as $h \\\\rightarrow T$. Assuming linear increase in memory consumption with $h$, the work also proposes a heuristic for selecting $h$ adaptively given a particular deep learning optimization problem, and hardware constraints or performance requirements. Empirical studies show the approach may be feasible for obtaining optimization algorithms that enable trading off performance for memory with more flexibility than FF, as well as another alternative (LoCo).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper offers a _fresh_ perspective, unifying BP and FF with inspiration from the MPC framework in search of a more flexible family of optimization algorithms.\\n2. The results obtained (both theoretical and empirical) show some promise in terms of controlling memory and performance trade-offs via the horizon parameter $h$.\\n3. Experimental setup is reasonably well-structured and conducive to conveying the main messages of the paper.\", \"weaknesses\": \"1. The connection to MPC seems like a bit of a stretch and makes the paper unnecessarily harder to digest in my opinion. That is, I found Sec. 3.2 to be needlessly long and dense; the same horizon idea could be described in simpler terms. The reason why I think the MPC connection is a bit of a stretch is that MPC applies _the optimal solution_ of the opt. problem to control the system given the trajectory cost, whereas the proposed approach takes a single gradient step.\\n2. In L211-214, the comments on memory usage read as though FF and/or the proposed framework has better _complexity_ than BP (re. usage of the word \\\"growth\\\"), when in fact the complexity is the same and gains are only in terms of constants. Indeed, Fig. 3 shows that FF ($h=1$) reduces memory by some factor of 3-4x in the best case and at a huge performance discount. Given modern hardware and distributed training capabilities, this brings to question whether interpolating FF and BP is worth the effort and complication to begin with (Occam's razor).\\n3. The theoretical result in Thm. 3.4 does not surprise me. Just looking at Fig. 1, one can already see that the gradients will be aligned exactly for roughly $h/T$ fraction of the parameters. Once again, I am not convinced the gravity of the result is worth the complication. Furthermore, the commentary in L270-271 seem to claim that alignment of the gradients necessarily translate to better performance, which I don't believe is true. Consider the Newton direction, which almost never aligns with the gradient, yet would likely yield much better performance than the gradient (steepest descent dir.) if it could be feasibly computed.\\n4. The horizon selection algorithm requires some runs with $h=T$. If this is possible on the available hardware, why bother reducing memory usage (except maybe some atypical use cases)? \\n5. Fig. 
3 (right) is missing bars on memory usage, which seems awkward and raises suspicion for the reader. Note also that the linear memory demand assumption seems only to hold for eager execution (but not static execution) of the backprop framework. This information should be highlighted in the main text. Currently it's only mentioned in Appx. E.1.\\n6. The same goes for the range of values considered on the x-axis of Fig. 2. The scale for the rightmost 2 plots should also go down to $\\\\approx 5 \\\\times 10^{-3}$ like the leftmost plot. \\n7. Overreaching claims: e.g., L492 says \\\"proposed horizon selection algorithm is _more efficient than_ BP\\\". Careful wording is critical for maintaining scientific tone. Perhaps it's better to say something like \\\"more memory-efficient than BP\\\" or \\\"better at optimizing our proposed objectives in (17-18)\\\".\", \"questions\": \"1. It is not clear to me why alignment gets worse with more training (re. Fig. 2 and L400-401).\\n2. Suggestion: Fig. 5 would be easier to read if the caption included a note \\\"lower is better\\\".\\n3. Suggestion: Before introducing (19), referring the reader back to (17-18) might improve readability.\\n4. Suggestion: The use of the word \\\"Objective\\\" in the sense of (17-18) can be confusing for the reader, seeing as BP and FF also optimize losses (or, \\\"objectives\\\").\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper tries to provide a unified training algorithm that connects BP and Forward-Forward algorithm based on the concepts or basic formulation in Model Predictive Control (MPC). The proposed training algorithm balances the accuracy and memory usage. The theoretical analysis is based on a deep linear model, followed by a horizon selection algorithm. Experiments are conducted by considering mang commonly used deep models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The proposed training algorithm is technically sound, which interpolates between BP and FF.\\n\\nThe writing is clear and easy to follow.\", \"weaknesses\": \"1.Motivation. I understand it is a doable research paper to trade off memory usage and accuracy. However, I feel it may not be necessary to sacrifice accuracy to gain memory efficiency, given that nowadays, we have relatively sufficient computation power to train large deep models, e.g., foundation models. Thus, it may be less pressing to consider this trade-off.\\n\\n2.Methodology. The proposed method simply borrows the concept of basic formulation of MPC without involving much technical content from MPC literature. Thus, I could not tell sufficient technical contribution in terms of methodology. Similarly, the title also seems misleading by emphasizing MPC too much.\\n\\n3.Theory. One apparent limitation is the authors only derive results based on deep linear models, which could be fundamentally different from modern (non-linear) deep models, such as ResNet and Transformers. Although the heuristic extensions to modern deep models in experiments validate the theoretical results, this limitation is still non-neglectable. \\n\\n4.Writing. The writing needs substantial improvement. Grammar errors include Line 35 (no subject) and Line 112 (not complete). The citation is also problematic, such as line 78.\\n\\n5.The proposed method is motivated from a mere optimization perspective without considering the generalization or learning theory, which can be fundamentally limited. For example, it seems that Figure 4 only consider the training loss instead of looking into the test loss.\\n\\n6.The choice of functions in Section 4 seems subjective, which is less convincing.\", \"questions\": \"Please see the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We appreciate the reviewers\\u2019 time and constructive feedback. After further consideration, we have decided to withdraw the submission and will work on improving the paper based on the feedback provided\"}",
"{\"summary\": \"This paper presents a training framework for deep neural networks formalized based on Model Predictive Control (MPC), where the horizon length can be adjusted to encompass both Back-Propagation (BP) and Forward-Forward (FF) algorithms as special cases. The framework allows for a flexible trade-off between memory usage and model performance by varying the horizon length. The authors provide an asymptotic theoretical analysis for the class of deep linear networks, demonstrating that as the horizon length approaches the total number of layers (or blocks), the gradient computed by the framework converges to that obtained using full BP. Additionally, numerical experiments validate the framework, offering both theoretical and practical insights into its performance across different models and tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is overall well-written and clear.\", \"The approach of viewing deep neural network optimization through the lens of MPC is innovative and provides a fresh perspective.\", \"Additionally, the experiments and theoretical results are well-aligned and effectively complement each other, strengthening the overall argument of the paper.\"], \"weaknesses\": [\"The paper overlooks a significant body of prior works that address the memory limitations of BP. Notable examples include techniques like checkpointing, forward-mode automatic differentiation, forward gradients [1]. I recommend that the authors include a comparison of their MPC framework with these memory-efficient techniques, specifically highlighting how their approach differs from or improves upon these existing methods in terms of memory savings and performance.\", \"Moreover, the time complexity of the proposed framework is not discussed. Based on my understanding, the time complexity would likely be $\\\\mathcal{O}((T-h+1)h)$. For middle values of $h$, which the authors suggest might balance memory and performance, the time complexity actually increases by a factor of $\\\\mathcal{O}(T)$. In this case, one could potentially use forward-accumulation gradients with the same time complexity and achieve better memory efficiency, while still producing gradients identical to BP (no performance loss). I suggest the authors provide a detailed analysis of the time complexity of their approach and clearly articulate the advantages of their framework compared to existing methods, particularly in terms of time and memory efficiency. This comparison would help clarify the specific benefits of the proposed approach over alternatives.\", \"A key experiment demonstrating the practical applicability of the framework is missing, particularly one that shows it can train a model from scratch with a small drop in performance while achieving significant memory savings. Without this, it is difficult to assess whether the proposed approach is useful in practice. I suggest the authors consider adding an experiment that compares training a model from scratch using their MPC framework (with various horizon lengths) against standard backpropagation, reporting both performance metrics and memory usage. This would provide concrete evidence of the framework's practical benefits and limitations.\", \"[1] Baydin, At\\u0131l\\u0131m G\\u00fcne\\u015f, et al. \\\"Gradients without backpropagation.\\\" arXiv preprint arXiv:2202.08587 (2022).\"], \"questions\": [\"In Figure 3, what does \\\"full tuning\\\" refer to? 
Does this experiment involve training the models from scratch, or is it a fine-tuning process? I'm confused due to the use of \\\"full tuning\\\" in Figure 3 but \\\"fine tuning\\\" in Table D2.\", \"What is the significance of introducing the framework through MPC? Does it help with the analysis of the method? Given that intermediate terms cancel out in equation (6), the connection to MPC seems somewhat contrived and appears to introduce unnecessary complexity without providing clear benefits in understanding.\", \"Is the use of \\\"max\\\" in Objectives (1) and (2) a typo?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
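The look-forward-horizon idea in the abstract of the paper above can be illustrated with per-block auxiliary losses: with horizon h, a block only ever receives gradients from a loss evaluated at most h blocks downstream, so h = 1 resembles purely local (forward-forward-style) training while h = T recovers end-to-end back-propagation. The sketch below is one plausible PyTorch reading with made-up layer sizes and auxiliary linear heads; it is not the authors' exact formulation, and it recomputes forward passes for clarity rather than efficiency.

```python
import torch
import torch.nn as nn

# Hypothetical model: a stack of T blocks, each with its own auxiliary head.
T, width, n_classes = 6, 128, 10
blocks = nn.ModuleList([nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(T)])
heads = nn.ModuleList([nn.Linear(width, n_classes) for _ in range(T)])
opt = torch.optim.SGD(list(blocks.parameters()) + list(heads.parameters()), lr=1e-2)
criterion = nn.CrossEntropyLoss()

def train_step(x, y, h):
    """One update with look-forward horizon h.

    For each starting block t, re-run blocks t .. min(t+h, T)-1 from a detached
    activation and back-propagate the auxiliary loss at the end of that window,
    so no block receives gradients from losses more than h blocks ahead.
    h = 1 is purely local training; h = T makes every block see the final loss.
    """
    opt.zero_grad()
    acts = [x]
    with torch.no_grad():                      # cache activations without a graph
        for blk in blocks:
            acts.append(blk(acts[-1]))
    for t in range(T):
        end = min(t + h, T)
        z = acts[t].detach()
        for k in range(t, end):                # rebuild the graph only inside the window
            z = blocks[k](z)
        criterion(heads[end - 1](z), y).backward()
    opt.step()

train_step(torch.randn(32, width), torch.randint(0, n_classes, (32,)), h=3)
```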
|
1L9vdc7BB5 | ADAPT: Adaptive Prompt Tuning for Pre-Trained Vision-Language Models | [
"Zhenhan Huang",
"Tejaswini Pedapati",
"Pin-Yu Chen",
"Jianxi Gao"
Prompt tuning has emerged as an effective approach to parameter-efficient fine-tuning. Conventional deep prompt tuning inserts continuous prompts of a fixed context length into the input to each layer. When a pre-trained model is tailored to a specific downstream task, different layers initialized with pre-trained weights might have, depending on the distribution shift type, different levels of deviation from the optimal weights. Inserted prompts with a fixed context length might have redundant context tokens or insufficient context length. To address this issue, we propose a deep continuous prompting method dubbed Adapt that encourages heterogeneous context lengths. Context lengths are automatically determined by iteratively pruning context tokens. We use the saliency criterion from neural network pruning to compute the importance scores of context tokens in order to determine which tokens to prune. We examine the proposed method on the pre-trained vision-language model CLIP. Extensive experiments on 11 downstream datasets reveal the advantage of Adapt: the average test accuracy increases from 79.83% to 81.70%. The highest performance gain on individual datasets is 9.63%. At the same time, the computational overheads are comparable to or smaller than those of baseline methods. | [
"Prompt Tuning; Multimodality; Vision-Language Models; Network Pruning"
] | Reject | https://openreview.net/pdf?id=1L9vdc7BB5 | https://openreview.net/forum?id=1L9vdc7BB5 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"z2rJl8OMs7",
"z00GyztQeQ",
"yNgZK6D7g1",
"xZIiHIOyS9",
"wt3TDgDjrg",
"wqT9hlMb97",
"rUJfxapqq0",
"qWtxOIR3lY",
"oDU6aSMsIh",
"gWXmla5cw0",
"g5JiZAPsVG",
"fqcORHYpBH",
"fH1oScpVwo",
"e4LqLAABqh",
"cxA3aKGKat",
"c8QFuekBLV",
"aWpdnq699W",
"Xt69Jk6J6b",
"VAQlOERsqO",
"Ts4G6x5vRT",
"RJhAgjxkH6",
"PoHgXUzkT1",
"PVCF08vTPX",
"J3OKJDOBd1",
"FdjdHrtT7D",
"EJDWddGp1d",
"ATIavN7Add",
"9Kq9XSJPo2",
"5U6mDR7aF6",
"1Dp8XwFRFY"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1733202840508,
1732214876831,
1732627079279,
1732565726511,
1732216628192,
1732670104955,
1729754756372,
1732213697699,
1731054332026,
1732526660663,
1732216651581,
1730624906513,
1733088340285,
1733088390045,
1734668967289,
1732743998037,
1730645390476,
1732612898378,
1732584820900,
1733088177894,
1732422419511,
1732526846710,
1732729494648,
1732526821689,
1732653039652,
1732728906116,
1732214221558,
1732614435121,
1737523904297,
1732215849034
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_qkws"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_qkws"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_Xqnt"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_Tr3E"
],
[
"ICLR.cc/2025/Conference/Submission8374/Area_Chair_yWcZ"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_bqgw"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Area_Chair_yWcZ"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_qkws"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_qkws"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_Xqnt"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_Xqnt"
],
[
"ICLR.cc/2025/Conference/Submission8374/Area_Chair_yWcZ"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Area_Chair_yWcZ"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8374/Reviewer_bqgw"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8374/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Comment by Reviewer qkws\", \"comment\": \"Thank you for your response and the experiment of prompt learning in LLM.\\n\\nAfter reading the rebuttal and other reviewers' comments, I will raise my score to 5. But this paper still has room for improvement (*e.g.*, the method effectiveness in 1/2/4/8 shot settings, and more related works in 'Sparse Training' section).\"}",
"{\"comment\": \"The authors thank the reviewer for the comments. Here is one-to-one response:\\n\\n- Weakness 1: We would like to start by clarifying that reference [1] only focuses on applying differentiable masks to model weights, which is different from our soft prompt learning setup. The differences between reference [1] and Adapt are:\\n\\n1. [1] uses differentiable masks while Adapt does not use differentiable masks. The variation of masks in [1] depends on the loss function and threshold function. Our method changes masks by pruning process and scoring functions. Hence, using differentiable masks introduces additional trainable parameters.\\n2. [1] applies masks to the model parameters (e.g. MSHA layer parameters) while Adapt applies masks to soft prompts. \\n3. [1] uses the mask to **zero out** some model parameters (element-wise multiplication) while Adapt uses the mask to **select** valid context tokens and there will be no zero-out tokens. When applying [1] to deep prompts, the context length **does not change**. When applying Adapt, the context length **does change**.\\n\\nFollowing the reviewer's suggestion, we did experiments using differentiable masks with the closest possible setup. Since deep prompting methods for vision-language models such as MaPLe [2] and VPT [3] do not insert the deep prompts in the same way as we do (comparison is shown in Figure 2 (a) and (c)), we tested the differentiable mask + traditional deep prompting denoted as \\u201cVPT + Differentiable Mask\\u201d (prompts are inserted for query, key and value and removed after self-attention) and the differentiable mask + our deep prompting method denoted as \\u201cAdapt + Differentiable Mask\\u201d (prompts are inserted for only key and value computation, no inserted prompts are removed after self-attention).\\n\\nDifferentiable masks do not apply a constraint on the total context length, so the prompt complexity can be different on different datasets. For a fair comparison, we include the performance of Adapt (adaptive $T_{target}$) that uses optimal $T_{target}$ determined by the validation dataset (details are described in Appendix A.9). In differentiable mask methods, the threshold alpha for the binary function I(M > alpha) is set to be 0.5. We do find a performance degradation using differentiable masks, especially using the traditional deep prompting method (a comparison between traditional deep prompting and our deep prompting methods is shown in Figure 2 (a) and (c)). The performance comparison is shown below:\\n\\n| Method | Caltech101 | DTD | EuroSAT | Aircraft | Food101 | Flowers | Pets | Cars | Sun | UCF | ImageNet | Average |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Adapt | 95.63 | 72.03 | 92.53 | 50.93 | 83.47 | 97.97 | 91.07 | 86.17 | 73.67 | 84.40 | 70.83 | 81.70 | \\n| Adapt (Adaptive $T_{target}$) | 96.17 | 72.17 | 92.60 | 52.07 | 87.03 | 98.40 | 92.47 | 86.70 | 75.33 | 84.40 | 72.07 | 82.67 |\\n| Adapt + Differentiable Mask | 95.53 | 71.87 | 92.17 | 51.03 | 82.17 | 97.43 | 88.90 | 85.70 | 73.03 | 83.73 | 70.03 | 81.05 |\\n| VPT + Differentiable Mask | 93.60 | 64.13 | 71.80 | 33.57 | 84.30 | 88.73 | 90.37 | 71.73 | 71.17 | 73.63 | 70.37 | 73.95 |\\n\\nWe have added this new result and cited [1] in Appendix A.13.\\n\\n- Question 1: We examine the performance of using $T_{target} = 256$ and $T_{target} = 512$. The new results are included and marked in blue color in Table 2 of the updated manuscript. 
The results empirically indicate that the upper bound for the average accuracy is 81.70%.\\n\\n- Question 2: In Adapt, prompt depth and context length are automatically determined. Hence, they are not hyperparameters of our model. The hyperparameters used by Adapt is $T_{target}$, $r_p$ (pruning rate), and accumulation steps (number of steps to compute the accumulated scores). The effect of $T_{target}$ is reported in Table 2. The effect of $r_p$ is reported in Appendix A.3.\\n\\nPlease refer to line 13 in Algorithm 1 and Figure 8 in Appendix. The prompt depth and context length are automatically determined.\\n\\n- Question 3: We add results using 1/2/4/8-shot training setting in Appendix A.10 using Adaptive $T_{target}$. The Adaptive $T_{target}$ is reported in Appendix A.9 and The few-shot learning result is shown in Appendix A.10.\\n\\n[1] Zheng, Kecheng, et al. \\\"Regularized mask tuning: Uncovering hidden knowledge in pre-trained vision-language models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Khattak, Muhammad Uzair, et al. \\\"Maple: Multi-modal prompt learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[3] Jia, Menglin, et al. \\\"Visual prompt tuning.\\\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\"}",
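To make the pruning mechanics discussed above concrete, here is a small PyTorch sketch of the general idea: accumulate a saliency score per context token, then globally deactivate the lowest-scoring fraction of still-active tokens, so context lengths (and effective prompt depth) become heterogeneous. The tensor shapes, the first-order score |p ⊙ ∂L/∂p| summed over the embedding dimension, and the pruning rate are simplifying assumptions for illustration, not a verbatim transcription of the paper's Algorithm 1.

```python
import torch

def accumulate_scores(prompts, scores):
    """Accumulate first-order saliency scores |p * dL/dp| per context token.

    prompts: list of (n_ctx, d) learnable prompt tensors (one per layer/branch).
    scores:  list of (n_ctx,) running score tensors, same layout as prompts.
    """
    for p, s in zip(prompts, scores):
        if p.grad is not None:
            s += (p.detach() * p.grad).abs().sum(dim=-1)
    return scores

def prune_step(masks, scores, r_p=0.1):
    """Deactivate the globally lowest-scoring fraction r_p of still-active tokens.

    masks: list of boolean (n_ctx,) tensors selecting the active context tokens;
    a layer whose mask becomes all-False simply receives no prompt anymore,
    so context lengths (and prompt depth) are determined automatically.
    """
    active = torch.cat([s[m] for s, m in zip(scores, masks)])
    k = max(1, int(r_p * active.numel()))
    thresh = active.kthvalue(k).values
    for m, s in zip(masks, scores):
        m &= s > thresh
    return masks
```

In contrast to an element-wise differentiable mask, the boolean masks here are used to select which tokens are still inserted, so pruned tokens are removed from the context rather than zeroed out.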
"{\"comment\": \"We thank the reviewer for the thoughtful feedback and for recognizing the strengths of our work. We are especially grateful for the positive assessment and the raised scores\"}",
"{\"comment\": \"The authors thank the reviewer for the feedback. Here is the one-to-one response:\", \"weakness_2\": \"Following the suggestion, we combined main paper and appendix in the same PDF file. The new file has been uploaded.\", \"weakness_7\": \"We thank the reviewer for the clarification and the discussion on dynamic spars training. We added a paragraph named \\u201cSparse Training\\u201d in Section 2 Related Work. Please refer to the updated manuscript.\\n\\nIn the level of dynamic training, our method is similar to dynamic sparse training. But there are some key differences:\\n\\n1. Our method works on soft prompts while dynamic sparse training works on model parameters, specifically, the weights of linear transformation layers for query, key and value.\\n\\n2. Dynamic sparse training uses element-wise multiplication for model weights while we use masks to select important soft prompts. The selection will cause different context lengths at different depths while element-wise multiplication does not change context lengths at different depths.\\n\\n3. Dynamic sparse training considers sparsity distribution to avoid \\u201cover-pruning\\u201d that might cause disconnectivity of neural layers. In our method, if all context tokens for one transformer layer are pruned, it indicates that prompt depth decreases by one. There is no \\u201cover-pruning\\u201d issue in our method.\"}",
"{\"comment\": \"The authors thank the reviewer for the feedback. Here is one-to-one response:\\n\\n- Weakness 1: The ICLR submission rule suggests the total number of pages to be 9. Following the reviewer\\u2019s suggestion, we add the analysis of the total context lengths in Appendix A.8.\\n\\nWe believe the comment \\u201cThere is more than one page to write.\\u201d is a misunderstanding. In the page length section of ICLR call for papers, it is stated that \\u201cWe encourage authors to be crisp in their writing by submitting papers with 9 pages of main text. We recommend that authors only use the longer page limit in order to include larger and more detailed figures. However, authors are free to use the pages as they wish, as long as they obey the page limits.\\u201d (please refer to the submission guidance https://iclr.cc/Conferences/2025/CallForPapers).\\n\\nIn our original submission, we show the result of pruned binary masks in Appendix Fig. 8. The results indicate that the context lengths are highly heterogeneous. We added a section (Appendix A.8) to analyze the total context lengths in text and image branches. The total context lengths are highly related to datasets as shown in Appendix Figure 6.\\n\\n- Weakness 2: We believe this is an oversight. We did upload the appendix with our original submission. Please refer to the link for supplementary material.\\n\\n- Weakness 3: The number of tokens to be pruned depends on the importance of the tokens. We use a scoring function to evaluate the importance of tokens. We do not constrain the number of tokens per layer. Please refer to Algorithm 1 for more details. If the context length for a certain layer is 0, it means there is no prompt for this layer.\\n\\n- Weakness 4: Prune rate $r_p$ is a hyperparameter. We show the result of using different pruning rates in Appendix A.3. The result suggests that different pruning rates do not significantly change the final mask, which indicates the robustness of the pruning strategy.\\n\\n- Weakness 5: We added more text explanations for the algorithm at the bottom of page 5 and text descriptions in the Algorithm 1 (line 7, 8, 9 and 12).\\n\\n- Weakness 6: We split the paragraphs in Introduction into multiple paragraphs. The first paragraph gives an overview of the PEFT methods and introduces prompting methods. The second paragraph elaborates on the two categories of prompting methods. The third paragraph talks about works related to continuous prompts. The fourth paragraph talks about the motivation why we propose our method. The fifth paragraph gives the whole picture of the Adapt method.\\n\\n- Weakness 7: We are not sure which dynamic neural networks the reviewer is referring to. Can the reviewer give some related references?\\n\\nWe checked classic dynamic neural network references. One line of work focuses on dynamic neural architecture. The depth [1-2], width [3-4] and structure [5] can change. Adapt uses the pre-trained model and the model weights and structure are fixed during the training process. Another line of work focuses on dynamic parameters. The model parameters such as kernel shape [6], channels [7] and features [8] can change. Adapt changes the context lengths during the training process, which do not affect the model parameters or structures. Dynamic neural networks generally use differentiable parameters to adaptively change parameters. The parameters in Adapt are not determined in a differentiable way. 
It changes in a discrete way.\\n\\nWe thank the reviewer\\u2019s suggestion. Here is what we did to improve the presentation:\\n\\n1. We changed \\u201cAdapt\\u201d to \\u201cAdapt (ours)\\u201d in Figure 1\\n2. We split the paragraphs in the Introduction into multiple paragraphs\\n3. We added one paragraph to explain more details in Algorithm 1 and text descriptions within Algorithm 1\\n4. We added the analysis of total context lengths in Appendix A.8\\n\\n- Issue: We thank the reviewer\\u2019s suggestion. We changed \\u201cAdapt\\u201d to \\u201cAdapt (ours)\\u201d in Figure 1.\"}",
"{\"title\": \"Official Comment by Reviewer qkws\", \"comment\": \"Thanks for your response.\\n\\nI greatly appreciate the author's design in the subfield (i.e. using binary masks to solve how CLIP can be used for downstream task training, such as few shot) that utilizes binary masks effectively. We would like to clarify my viewpoint again. I am not claiming that ADAPT has low novelty due to the similar notion of \\\"binary mask\\\". But rather, in this subfield, the use of binary masks for trasferring CLIP to few-shot tasks has already been explored, and I think the core idea 'use binary mask for transferring CLIP to few-shot' is similar. The authors claim that 'this is the first work to prune prompts'. If the core idea is to 'use binary mask in prompts', prompt learning has a wide rang e of applications and should be validated for its effectiveness in various tasks, such as prompt learning in LLM.\\n\\nAdditionally, the ADAPT is highly dependent on the amount of data (more than 8 shots). As shown in Figure 1, the authors only show the results in 16 shot, but in 1/2/4/8 shot setting the performance has significantly decreased. In few-shot setting of CLIP, 1/2/4/8 shot settings are also important. \\n\\nThe initial version did not provide a detailed comparison with the binary mask scheme used in the CLIP field in related work and experiments. I find that the updated version provided these and can be improved by more detailed comparision and related works. After carefully reading the updated version, I will raise my score to 5 based on the Sparse Training of related work and comparision with differential mask from the authors. But I am still concerned about performance degradation with smaller shot.\"}",
"{\"summary\": \"In this paper, the authors propose adaptively pruning prompt tokens during the prompt tuning, rather than using fixed prompt length. They use metrics in network pruning to compute the importance scores of prompt tokens and prune less important tokens gradually.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"Strength:\\n+The average performance on 11 downstream datasets verifies the effectiveness of the proposed methods.\\n+The proposed method shows slightly fewer FLOPs than existing methods.\\n+Adaptively changing the prompt tokens is an interesting idea.\", \"weaknesses\": \"Weakness:\\n1. There is more than one page to write. It looks like a paper in progress \\u00a0The authors should consider to include more experiments and analysis. For example, the authors can show that different datasets prefer different prompt token lengths to verify the importance of the proposed method.\\n\\n2. In line 377, the authors write \\u201cThe result is shown in Appendix Figure 4. However, the appendix is missing. The authors should move it from the supplementary material to the end of the main paper.\\n\\n3. How do we determine the number of tokens to prune each each layer? \\n\\n4. How to set the number of prune steps rp.\\n\\n5. There are too many mathematical symbols, especially in Algorithm 1, making it hard to understand, even though the operation used in this paper is easy. The authors should improve this to improve the readability.\\n\\n6. There are only two paragraphs in the Introduction Section. The authors should consider splitting them into more paragraphs.\\n\\n7. The proposed methods are highly related to dynamic neural networks. The authors should discuss it and cite related papers.\\n\\nI think that the idea of this paper is good enough. However, the authors should improve their presentation.\", \"issues\": \"In Figure1, the authors should indicate the proposed method with \\u201cAdapt (Ours)\\u201d.\", \"questions\": \"See Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"The authors thank the reviewer for the feedback. Here is one-to-one response:\\n\\n- Weakness 1: Some datasets do not have the best performance\\n\\nWe understand the reviewer\\u2019s concern and agree that this is an important issue to elaborate on. Despite the performance variances across datasets, our method achieves the best performance on 7 out of 11 datasets and ranks in the top 3 on 8 out of 11 of them. The second best method LAMM achieves the best performance on 0 out of 11 datasets and the top 3 performance on 7 out of 11 datasets. The third best method ProGrad achieves the best performance on 2 out of 11 datasets and the top 3 performance on 6 out of 11 datasets. \\n\\nWe force the same $T_{target}$ to ensure a similar number of trainable parameters across datasets. When allowing different $T_{ target}$ for different datasets, we can have a significant performance gain on some datasets such as Food101 as shown in Table 1 of the updated manuscript. The details regarding Adapt (adaptive $T_{target}$) are reported in Appendix A.9.\\n\\nWe also want to note that the \\u201cno-winner-takes-all\\u201d finding has been consistently observed in the study of continuous prompting methods for VLMs. When PLOT (ICLR 2023) [1] is published, there are two prompting method baselines. It achieves the best performance on 6 out of 11 datasets. When LAMM (AAAI 2024) [2] is published, there are two prompting method baselines. It achieves the best performance on 7 out of 11 datasets. Thus, none of the methods consistently rank within the top 3 on all the datasets\\n\\nOn the Pets dataset, different methods have similar performance. Even the zeroshot CLIP can achieve reasonably good performance. We would like to draw attention to the fact that our method has pronouncedly better performance on challenging datasets such as DTD, EuroSAT and Aircraft where the zeroshot CLIP has poor performance.\\n\\n- Weakness 2: Heterogeneous prompt lengths could make the model harder to implement in practical scenarios\\n\\nOur method uses $T_{target}$ as a hyperparameter to automatically determine the context lengths at different depths by pruning (please refer to Algorithm 1 line 13). The manually designed deep prompting method has a hyperparameter of prompting depth. Hence, different depths can have different context lengths (when the depth is equal to or smaller than d, the context length is t per layer; when the depth is larger than d, the context length is 0 per layer). Our method intentionally introduces heterogeneous context lengths and finds that it achieves better performance than manually designed homogeneous context lengths.\\n\\nWhen we report the performance, it is averaged over 3 runs. We included the standard deviation in Appendix A.6.\\n\\nFinally, to fully address the reviewer\\u2019s comment, could the reviewer elaborate on the difficulty in practical scenarios in terms of consistency and predictability?\\n\\n- Weakness 3: There is no explicit mechanism to ensure the two branches are aligned\\n\\nThis is a great suggestion for an ablation study. In Adapt, we intentionally enable the different total context lengths in the text and image branches. Therefore, the setting of the same total context length in text and image branches is included in our search space of the current setting. Following the reviewer\\u2019s suggestion, we conducted experiments that ensured the same context lengths in the image and text branches. 
The modification we made was that instead of combining scores in text and image branches, we ranked scores in the text branch and scores in the image branch. In each pruning step, we pruned r_p tokens in the text branch and r_p tokens in the image branch. We noticed a pronounced performance degradation by applying constraints on the total context length of two branches to be the same (Adapt Constraint). The results comparison is:\\n\\n| Method | Caltech101 | DTD | EuroSAT | Aircraft | Food101 | Flowers | Pets | Cars | Sun | UCF | ImageNet | Average | \\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Adapt w/ constraint | 95.43 | 69.17 | 81.50 | 49.13 | 83.30 | 98.03 | 87.47 | 84.13 | 73.80 | 83.37| 69.93 | 79.57 | \\n| Adapt w/o constraint | 95.63 | 72.03 | 92.53 | 50.93 | 83.47 | 97.97 | 91.07 | 86.17 | 73.67 | 84.40 | 70.83 | 81.70 | \\n\\nThe experiment indicates that two branches do not need to be aligned. Less constraint is beneficial to fully exploit the power of prompt tuning. Please refer to Appendix A.11 for more details.\\n\\n- Question 1: more details on the scoring function\\n\\nIn our initial submission, we tested Snip, Gradient norm and l2-norm as scoring functions (shown in Eq. 6 and the ablation study \\u201cScore computation\\u201d in Sec. 4.3). We empirically found that Snip works better. Snip considers gradients and magnitudes of model parameters. Gradient norm only considers gradients. l2-norm only considers magnitudes of model parameters.\"}",
"{\"summary\": \"To address the limitations of fixed-length prompt tuning approaches for pre-trained vision-language models, the authors propose ADAPT, an adaptive prompt tuning method that dynamically determines optimal prompt lengths during fine-tuning. By employing an iterative pruning strategy, ADAPT identifies and removes less relevant prompt tokens at each layer, allowing efficient parameter usage while maintaining model performance. The authors evaluate ADAPT across 11 benchmark datasets, demonstrating that the method significantly reduces the number of parameters required while achieving competitive or improved accuracy. This adaptive approach highlights the benefits of automatic context length adjustment compared to manually designed fixed-length prompts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors propose a novel adaptive prompt tuning approach, ADAPT, that effectively reduces the number of parameters needed for pre-trained vision-language models while maintaining competitive performance across a variety of downstream tasks. This efficiency is a notable contribution to prompt-based fine-tuning methods.\\nBy leveraging an iterative pruning mechanism, ADAPT dynamically adjusts the prompt lengths for different layers, enabling a flexible solution that outperforms traditional fixed-length prompt tuning methods, particularly in scenarios that require task-specific adaptations.\\nThe approach is validated on 11 diverse datasets, covering different vision-language tasks. This broad evaluation demonstrates the adaptability and applicability of ADAPT across a wide range of contexts.\\nThe pruning process used by ADAPT results in heterogeneous context lengths, automatically determining the optimal prompt length at each layer, which is an improvement over manually designed prompts that tend to be homogeneous and less efficient.\", \"weaknesses\": \"ADAPT shows significant performance degradation in certain categories, such as the Pets class, where it fails to rank even in the top three. It is regrettable that the authors did not conduct further discussion and research on this issue.\\nThe highly heterogeneous prompt lengths determined by the pruning mechanism could make the model harder to implement in practical scenarios where consistency and predictability are valuable, compared to using manually fixed homogeneous prompt lengths.\\nAlthough ADAPT optimizes both text and image branches independently, there is no explicit mechanism mentioned to ensure that the branches remain aligned in terms of context length adjustments. This could potentially lead to imbalances that affect the model's overall performance.\", \"questions\": \"Could the authors provide more details about the scoring function used to determine token importance during pruning? Were any alternative scoring mechanisms considered, and if so, why was the current approach chosen?\\nHow does ADAPT ensure stability during the pruning process, especially given the highly heterogeneous prompt lengths across different layers? Are there any safeguards in place to avoid over-pruning, where the model could lose important contextual information?\\nThe evaluation on 11 datasets showed varying degrees of performance, with some datasets exhibiting reduced accuracy compared to the baseline. 
Could the authors elaborate on the potential reasons behind these inconsistencies and suggest strategies that could mitigate these issues in future iterations of ADAPT?\\nGiven the independence of the pruning processes for the text and image branches, is there any mechanism in place to maintain synchronization between the two branches during training? If not, could this lead to potential issues in multimodal understanding?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer Tr3E,\\n\\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\\n\\nBest,\\n\\nAC of Submission8374\"}",
"{\"comment\": \"[1] Wang, Xin, et al. \\\"Skipnet: Learning dynamic routing in convolutional networks.\\\" Proceedings of the European conference on computer vision (ECCV). 2018.\\n\\n[2] Veit, Andreas, and Serge Belongie. \\\"Convolutional networks with adaptive inference graphs.\\\" Proceedings of the European conference on computer vision (ECCV). 2018.\\n\\n[3] Bengio, Yoshua, Nicholas L\\u00e9onard, and Aaron Courville. \\\"Estimating or propagating gradients through stochastic neurons for conditional computation.\\\" arXiv preprint arXiv:1308.3432 (2013).\\n\\n[4] Mullapudi, Ravi Teja, et al. \\\"Hydranets: Specialized dynamic architectures for efficient inference.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n\\n[5] Huang, Gao, et al. \\\"Multi-scale dense networks for resource efficient image classification.\\\" arXiv preprint arXiv:1703.09844 (2017).\\n\\n[6] Gao, Hang, et al. \\\"Deformable kernels: Adapting effective receptive fields for object deformation.\\\" arXiv preprint arXiv:1910.02940 (2019).\\n\\n[7] Yang, Brandon, et al. \\\"Condconv: Conditionally parameterized convolutions for efficient inference.\\\" Advances in neural information processing systems 32 (2019).\\n\\n[8] Su, Hang, et al. \\\"Pixel-adaptive convolutional neural networks.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\"}",
"{\"summary\": \"The paper assumes that a fixed context length for prompts may lead to either redundant context tokens or insufficient context length when transferring a pre-trained model to downstream tasks. Based on this assumption, the paper proposes a method to automatically determine the prompt length.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method, ADAPT, changes the context lengths for different transformer layers by iteratively pruning context tokens. ADAPT surpasses the SOTA method on 16-shot image classification tasks.\", \"weaknesses\": \"It is unclear why the convergence of model training is determined solely by reaching T_target. T_target may vary across different training datasets, but it is set to a fixed value for all datasets. Additionally, if the mask for the text encoder is too sparse, this training target might restrict the sparsity of the mask for the image encoder.\\n\\nThe paper should provide a more detailed analysis of the learned binary masks. According to Figure 3, on the EuroSAT dataset, more context tokens are required in the middle layer of the image encoder, while the first layer of the text encoder requires more context tokens. An analysis of this discrepancy should be included.\\n\\nADAPT is trained and evaluated on the few-shot classification task, following the CoOP methodology. Thus, it should also report results under other training settings (1-shot, 2-shot, 4-shot, and 8-shot) to enable a more comprehensive comparison with state-of-the-art methods.\\n\\nMoreover, UPT[1] should be included for comparison, as it also introduces prompts in both the text and image encoders, similar to ADAPT.\\n\\n[1] Unified vision and language prompt learning.\", \"questions\": \"Please see the questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank the reviewer for the time and effort spent in evaluating our work, As the author-reviewer discussion phase is ending soon, we have not had a chance to engage with the reviewer. We would like to kindly ask if our response solves all of the reviewer\\u2019s concerns.\"}",
"{\"comment\": \"We thank the reviewer for the time and effort spent in evaluating our work. As the author-reviewer discussion phase is ending soon, we would like to kindly ask if there is any feedback based on our latest response (https://openreview.net/forum?id=1L9vdc7BB5¬eId=c8QFuekBLV).\"}",
"{\"metareview\": \"(a) The paper proposes a deep continuous prompting method called Adapt to address the limitations of conventional deep prompt tuning. Adapt encourages heterogeneous context lengths by automatically determining and pruning context tokens based on saliency scores.\\n\\n(b) Strengths: The paper is well-written and presents a clear explanation of the proposed method. Extensive experiments on 11 downstream datasets demonstrate the effectiveness of Adapt. The idea of adding masks to prompts at different depths is an interesting and innovative approach.\\n\\n(c) Weaknesses: The paper lacks clarity on how to set the number of pruning steps. The excessive use of mathematical symbols, particularly in Algorithm 1, hinders readability and should be improved. The introduction section is too brief and could benefit from further elaboration and paragraph breaks. The paper does not sufficiently discuss its relationship with dynamic neural networks, the convergence of model training, or the impact of setting T_target as a fixed value. Additionally, the analysis of learned binary masks and a more comprehensive comparison with other training settings and methods, such as UPT, is needed.\\n\\n(d) The most important reasons for reject are: This work fails to compare/discuss several important baselines (e.g., PromptSRC[1]) in this field and the AC finds that there some incorrect performance shown in the tables/figures. First, this paper seems to deliberately ignore some strong sota baseline, e.g., PromptSRC[1]. The performance of PromptSRC in few-shot setting with 16-shot is 82.87, whilst the performance of the method in this work is 82.67. We suggest that the authors at least mention these baselines, rather than completely ignoring them. If the performance does not outperform, could the authors' method be applied on top of these baselines? Second, the AC discovers that there are inconsistencies between the performance reported in this work (MaPLe performance in Figure 7) and the published one [1] (MaPLe performance in Table 13). For the few-shot performance of MaPLe from 1-shot to 16-shot, this work reports around 65.5, 70, 73, 75.5, 78 from the Figure 7; whilst [1] reports 69.27,72.58, 75.37, 78.89, 81.79. The settings (few-shot 1, 2, 4, 8, 16) and methods (MaPLe) are identical, so the AC is unclear about the reason for such a large gap.\\n\\n[1] Self-regulating Prompts: Foundational Model Adaptation without Forgetting, ICCV 23 (Google Citation 100+).\", \"additional_comments_on_reviewer_discussion\": \"(a) Reviewer Tr3E points out that ADAPT shows significant performance degradation in certain categories, such as the Pets class, and lacks further discussion on this issue. The highly heterogeneous prompt lengths resulting from the pruning mechanism could reduce the model's practicality due to the need for consistency and predictability. Additionally, the lack of an explicit alignment mechanism between the text and image branches may lead to imbalances, potentially affecting the model's overall performance. The reviewer fails to respond during the discussion phase.\\n\\n(b) Reviewer qkws highlights that while adding learnable masks to prompts is an interesting idea, it is similar to existing methods and lacks a discussion on the differences with related work. The paper does not explore the effect of larger T_target values on performance, and the upper bound of the proposed method remains unclear. 
Additionally, the reviewer suggests conducting ablations on prompt depth and context length and providing results for various few-shot training settings to better demonstrate the method's effectiveness. The reviewer actively participates in the discussion phase and raises the score from 3 to 5, stating there is still room for improvement (e.g., the method effectiveness in 1/2/4/8 shot settings, and more related works in 'Sparse Training' section).\\n\\n(c) Reviewer bqgw points out that the convergence of model training is overly reliant on a fixed T_target, which may not be suitable for all datasets, and could lead to issues with mask sparsity in the image encoder. The paper lacks a detailed analysis of the learned binary masks, particularly regarding discrepancies in context token requirements across layers. Additionally, the reviewer suggests including results for various few-shot training settings and comparing ADAPT with the UPT method for a more comprehensive evaluation. The rebuttal addresses most of the concerns. The reviewer finds that automating the selection of T_target would be a valuable improvement thus maintains the initial score of 5.\\n\\n(d) Reviewer Xqnt suggests that the paper lacks clarity in setting the number of prune steps (rp) and in the excessive use of mathematical symbols, particularly in Algorithm 1, which hampers readability. The introduction should be expanded into more paragraphs, and the authors should discuss and cite related work on dynamic neural networks. Additionally, the reviewer recommends improving the overall presentation and labeling the proposed method as \\\"Adapt (Ours)\\\" in Figure 1. The questions are well addressed and the reviewer raises the score to 6.\"}",
"{\"comment\": \"We thank the reviewer for the response and appreciate that the reviewer decided to raise the score. Your comments and suggestions are tremendous in helping us to better reflect our contributions and findings.\\n\\nFollowing your comment on the limitation of the few-shot setting, we added a new paragraph about the limitation of the Adapt method in Section 5 of the updated version.\\n\\nWe also agree with your comment that showing Adapt is beneficial to language models can further demonstrate its impact and effectiveness. Due to the time limit, we did preliminary experiments on applying Adapt for BERT [1]. The baseline is p-tuning v2 [2]. Consistent with the CLIP results, we observed a performance gain compared to p-tuning v2. The performance comparison is shown in the Table below. Details are reported in Appendix A.14.\\n\\n| Method | COPA # param | COPA Acc | BoolQ # param | BoolQ Acc | RTE # param | RTE Acc |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| P-Tuning v2 | 0.787 M | 78.00 | 1.968 M | 75.02 | 0.985 M | 78.17 |\\n| Adapt | 0.297 M | 80.00 | 0.297 M | 76.50 | 0.297 M | 79.17 |\\n\\n[1] Kenton, Jacob Devlin Ming-Wei Chang, and Lee Kristina Toutanova. \\\"Bert: Pre-training of deep bidirectional transformers for language understanding.\\\" Proceedings of naacL-HLT. Vol. 1. 2019.\\n\\n[2] Liu, Xiao, et al. \\\"P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks.\\\" arXiv preprint arXiv:2110.07602 (2021).\"}",
"{\"summary\": \"This paper proposes a deep continuous prompting method dubbed Adapt that encourages heterogeneous context lengths.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"-The paper is well-written.\\n\\n-Extensive experiments on 11 downstream datasets reveal the advantage of Adapt.\\n\\n-Adding mask to the prompts of different depth is an interesting idea.\", \"weaknesses\": \"Adding learnable mask to the prompts of different depth is an interesting idea. But, existing methods [1] proposed to add learnable mask to the parameters of CLIP. Adding learnable mask to parameters and add learnable mask to prompt have similar methods. Moreover, this paper did not discuss the difference between ADAPT and [1], which miss this key reference.\\n\\n[1] Regularized Mask Tuning: Uncovering Hidden Knowledge in Pre-trained Vision-Language Models, ICCV 2023\", \"questions\": \"-The hyperparameter T_target controls sparsity of masks. According to Table 2, the model reaches better averaged performance when T_target is set to a larger value (the masks are less sparse). What if T_target is set to a value larger than 128? What is the upper bound of the proposed method?\\n\\n-Ablations on prompt depth and context length should be conducted. \\n\\n-To demonstrate the effectiveness of the proposed method on few-shot classification tasks, the paper should provide results on 1/2/4/8-shot training setting, similar to those reported in CoOP and other related studies.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Reviewer qkws\", \"comment\": \"I thank the authors for their response.\\n\\nSome concerns have been addressed. I appreciate this work, but I still believe that the core idea is similar to [1], these two methods both add Binary Mask to some parameters (e.g., model weights or prompt weights). \\n\\nAdditionally, the ADAPT does not show the advantages in few-shot learning tasks (1/2/4/8). As the number of available images decreases (from 16 shot to 1 shot), the performance advantage of this method becomes inferior to existing methods (e.g., MaPLe\\\\MaPLe\\\\LAMM). This indicates that the method is highly dependent on the amount of data (more than 8 shots) and is not very robust.\\n\\nRegarding method innovation and robustness, I think there is still some room for improvement in this work. I will keep my original score of 3.\"}",
"{\"comment\": \"Thanks the responses from the authors. I raise the score to 6.\"}",
"{\"comment\": \"We thank the reviewer for the time and effort spent in evaluating our work. As the author-reviewer discussion phase is ending soon, we would like to kindly ask if the concern has been addressed with our latest response (https://openreview.net/forum?id=1L9vdc7BB5¬eId=PVCF08vTPX)?\"}",
"{\"comment\": \"Thanks the responses from the authors.\", \"weakness_2\": \"I recommend including this content directly within the PDF after the main paper, rather than placing it in the \\\"Supplementary Material\\\" section. This change could improve the overall readability and accessibility of the paper.\", \"weakness_7\": \"Thank you for the detailed review of dynamic neural networks. I now understand that the proposed method differs from traditional neural networks. However, I am curious whether the proposed method has any relevance to dynamic sparse training [1], which is also designed to address the over-parameterization problem. Additionally, I still believe the literature review in this paper could be further improved.\\n\\n[1] Chasing Sparsity in Vision Transformers: An End-to-End Exploration.\"}",
"{\"comment\": \"Dear Reviewer bqgw,\\n\\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\\n\\nBest,\\n\\nAC of Submission8374\"}",
"{\"comment\": \"Thank you for the response. We are glad that most of your concerns were addressed. Regarding the adaptive $T_{target}$ comment, we want to emphasize that our method already has the best average performance than all baselines even using fixed $T_{target}$, as shown in Table 1 in the manuscript. In the rebuttal, we show that by allowing the $T_{target}$ value to be adaptive for each dataset, the performance can be further improved because of the additional design choices in our search algorithm. This new result should not be misinterpreted as the ineffectiveness of the fixed-value result, given that the search spaces are different. One search space has the constraint that the total number of context lengths is fixed over all datasets while the other one does not have such constraint\"}",
"{\"comment\": \"Dear Reviewer qkws,\\n\\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\\n\\nBest,\\n\\nAC of Submission8374\"}",
"{\"comment\": \"We respectfully disagree that our method has low novelty due to the similar notion of \\\"binary mask\\\" in [1]\\\". We argue that making this claim is like saying many works in binary mask pruning/learning have no novelty because of the similarity to dropout, for which we hope the reviewer would disagree. The differences compared to baseline methods are listed in the reply (https://openreview.net/forum?id=1L9vdc7BB5¬eId=z00GyztQeQ).\\n\\nAs evidence, we find highly impact papers using binary masks:\\n\\n- [2] (ICLR 2019) uses binary masks and proposes Lottery Ticket Hypothesis (LTH).\\n- [3] (ICLR 2023) uses binary masks through the lens of Ramanujan Graph.\\n- [4] (ICML 2022) uses binary masks based on LTH.\\n- [5] (JMLR 2021) uses binary masks in the sparse training.\\n- [6] (ICLR 2023) uses binary masks in large language models.\\n- [7] (NIPS 2020) uses binary masks. The pruning process is based on the proposed criteria.\\n- [8] (ICLR 2024) uses binary masks to accelerate pre-training process of large language models.\\n\\nReviews/Surveys on network pruning and spare training [A1-A4] list a variety of papers using binary masks. We notice that none of them explore the application of network pruning in prompts. Directly applying element-wise multiplication will cause issues in prompting method as it generates zero embeddings in prompts. Please refer to our experiments in Appendix Table 11.\\n\\n[1] Zheng, Kecheng, et al. \\\"Regularized mask tuning: Uncovering hidden knowledge in pre-trained vision-language models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Frankle, Jonathan, and Michael Carbin. \\\"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\\\" ICLR 2019 (2019).\\n\\n[3] Hoang, Duc NM, and Shiwei Liu. \\\"Revisiting pruning at initialization through the lens of ramanujan graph.\\\" ICLR 2023 (2023).\\n\\n[4] Pal, Bithika, et al. \\\"A study on the ramanujan graph property of winning lottery tickets.\\\" International Conference on Machine Learning. PMLR, 2022.\\n\\n[5] Hoefler, Torsten, et al. \\\"Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks.\\\" Journal of Machine Learning Research 22.241 (2021): 1-124.\\n\\n[6] Sun, Mingjie, et al. \\\"A simple and effective pruning approach for large language models.\\\" arXiv preprint arXiv:2306.11695 (2023).\\n[7] Tanaka, Hidenori, et al. \\\"Pruning neural networks without any data by iteratively conserving synaptic flow.\\\" Advances in neural information processing systems 33 (2020): 6377-6389.\\n\\n[8] Xia, Mengzhou, et al. \\\"Sheared llama: Accelerating language model pre-training via structured pruning.\\\" arXiv preprint arXiv:2310.06694 (2023).\\n\\n[A1] Cheng, Hongrong, Miao Zhang, and Javen Qinfeng Shi. \\\"A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).\\n\\n[A2] He, Yang, and Lingao Xiao. \\\"Structured pruning for deep convolutional neural networks: A survey.\\\" IEEE transactions on pattern analysis and machine intelligence (2023).\\n\\n[A3] Fedus, William, Jeff Dean, and Barret Zoph. \\\"A review of sparse expert models in deep learning.\\\" arXiv preprint arXiv:2209.01667 (2022).\\n\\n[A4] Qiao, Lin-bo, et al. \\\"A systematic review of structured sparse learning.\\\" Frontiers of Information Technology & Electronic Engineering 18.4 (2017): 445-463.\"}",
"{\"comment\": \"Dear reviewer,\\n\\nWe appreciate the time and efforts the reviewer dedicated to evaluating our work. As the deadline for the author-reviewer discussion is closing soon but we still have not received any feedback, we kindly ask if the reviewer could review our responses and let us know if any additional clarifications or modifications are required from our side.\\n\\nAuthors\"}",
"{\"comment\": \"- Question 2: How to ensure stability? Are there safeguards?\\n\\nEven using fixed $T_{target}$, our method can surpass the baseline methods. Further, we can use the validation accuracy to determine the optimal $T_{target}$ for each dataset. The performance of using adaptive $T_{target}$ is reported in Appendix A.9.\\n\\nWhen we applied pruning, only inserted soft prompts were pruned while the original embeddings were not pruned. Taking an extreme case as an example, if all soft prompts at a certain depth are pruned, that indicates the prompt depth decreases by one.\\n\\n- Question 3: Potential reasons for the inconsistency of the performance\\n\\nWe noticed that there is no method that performs uniformly well across all datasets. To be fair to the baselines, we used the same hyperparameters including $T_{target}$ for all datasets. However, as shown in Table 2, when there is a relatively large distribution shift (e.g. EuroSAT dataset contains satellite images), a larger $T_{target}$ can lead to a better performance. When the distribution shift is small (e.g. Caltech101 dataset contains generic images), a smaller $T_{target}$ leads to better performance. Hence, one potential reason causing the different ranks on different datasets is that different datasets, depending on how different they are from the pre-trained dataset, might require different prompt complexity. We also included an adaptive selection method of Adapt in Appendix A.9 to select the best $T_{target}$ for each dataset. The results show that the average accuracy can be improved compared to using Adpat with a universal and fixed $T_{target}$. To make a fair comparison, we use the $T_{target}$ to apply the constraint on the prompt complexity.\\n\\nRank results; We use a fixed $T_{target}$, not an optimal hyperparameter. The reason we use fixed hyperparameters is to make a fair comparison. \\n\\n- Question 4: synchronization between two branches and potential issue in multimodal understanding\\n\\nWe would like to point out that we are not pruning the original tokens but only the additional inserted context tokens. Thus, the original tokens are not pruned. The inserted tokens are trained such that the overall loss of the model on the given task is reduced. Thus, even when we prune the number of additional tokens, the loss of the model is still reduced indicating that its performance is not hampered but only improved. We would also like to point out that some works (e.g. CoCoOp [5] and ProGrad [6]) only insert prompts to text branches, which can boost the performance of the pre-trained model.\\n\\n[1] Chen, Guangyi, et al. \\\"Plot: Prompt learning with optimal transport for vision-language models.\\\"International Conference on Learning Representations. 2023.\\n\\n[2] Gao, Jingsheng, et al. \\\"LAMM: Label Alignment for Multi-Modal Prompt Learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 3. 2024.\\n\\n[3] Khattak, Muhammad Uzair, et al. \\\"Maple: Multi-modal prompt learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[4] Zang, Yuhang, et al. \\\"Unified vision and language prompt learning.\\\" arXiv preprint arXiv:2210.07225 (2022).\\n\\n[5] Zhou, Kaiyang, et al. \\\"Conditional prompt learning for vision-language models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[6] Zhu, Beier, et al. 
\\\"Prompt-aligned gradient for prompt tuning.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\"}",
"{\"comment\": \"Thanks for the responses. The rebuttal shows that $T_{target}$ should be set to different values on different datasets. This suggests that automating the selection of $T_{target}$ would be a valuable improvement. Additionally, the performance of ADAPT in other training setting is lower than that of the state-of-the-art methods. Therefore, I maintain my initial score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"The authors thank the reviewer for the feedback. Here is one-to-one response:\\n\\n- Weakness 1: Originally, we set a fixed $T_{target}$ to ensure that on different datasets, Adapt has a similar model complexity and is fair to the baselines. The reviewer is correct that by making $T_{target}$ adaptive to each dataset, the performance can be further improved. To demonstrate this point, we consider the setting allowing the dataset-dependent $T_{target}$. The details are reported in Appendix A.9, the performance comparison is:\\n\\n| Method | Caltech101 | DTD | EuroSAT | Aircraft | Food101 | Flowers | Pets | Cars | Sun | UCF | ImageNet | Average |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Adapt | 95.63 | 72.03 | 92.53 | 50.93 | 83.47 | 97.97 | 91.07 | 86.17 | 73.67 | 84.40 | 70.83 | 81.70 | \\n| Adapt (adaptive $T_{target}$) | 96.17 | 72.17 | 92.60 | 52.07 | 87.03 | 98.40 | 92.47 | 86.70 | 75.33 | 84.40 | 72.07 | 82.67 |\\n\\n- Weakness 2: What the reviewer points out is the uniqueness of the Adapt method that introduces the heterogeneous context lengths. We analyzed the context lengths on different datasets. The results are added in Appendix A.8. We noticed that for datasets that have a larger degree of out-of-distribution, there will be more context tokens added to the image branch.\\n\\n- Weakness 3: We added results using 1/2/4/8-shot. We use the validation dataset to determine the optimal $T_{target}$ named Adapt (Adaptive $T_{target}$). Details regarding Adapt (Adaptive $T_{target}$) are shown in Appendix A.9 and the result of 1/2/4/8/16-shot is shown in Appendix A.10.\\n\\n- Weakness 4: We wanted to include UPT work for the comparison, but the code is not included in the repo [1]. We implemented UPT [2] on our own. Below is the performance we got:\\n\\n| Method | Caltech101 | DTD | EuroSAT | Aircraft | Food101 | Flowers | Pets | Cars | Sun | UCF | ImageNet | Average |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| UPT | 93.53 | 64.17 | 84.13 | 34.73 | 76.33 | 95.47 | 90.00 | 74.93 | 75.83 | 78.50 | 70.47 | 76.19 |\\n| Adapt | 95.63 | 72.03 | 92.53 | 50.93 | 83.47 | 97.97 | 91.07 | 86.17 | 73.67 | 84.40 | 70.83 | 81.70 |\\n\\nWe have added the comparison in Appendix A.12 and cited UPT work in the updated version.\\n\\n[1] https://github.com/yuhangzang/UPT\\n\\n[2] Zang, Yuhang, et al. \\\"Unified vision and language prompt learning.\\\" arXiv preprint arXiv:2210.07225 (2022).\"}"
]
} |
1L52bHEL5d | Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos | [
"Merey Ramazanova",
"Alejandro Pardo",
"Bernard Ghanem",
"Motasem Alfarra"
] | Understanding videos that contain multiple modalities is crucial, especially in egocentric videos, where combining various sensory inputs significantly improves tasks like action recognition and moment localization. However, real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues. Current methods, while effective, often necessitate retraining the model entirely to handle missing modalities, making them computationally intensive, particularly with large training datasets. In this study, we propose a novel approach to address this issue at test time without requiring retraining. We frame the problem as a test-time adaptation task, where the model adjusts to the available unlabeled data at test time. Our method, MiDl (Mutual information with self-Distillation), encourages the model to be insensitive to the specific modality source present during testing by minimizing the mutual information between the prediction and the available modality. Additionally, we incorporate self-distillation to maintain the model's original performance when both modalities are available. MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time. Through experiments with various pretrained models and datasets, MiDl demonstrates substantial performance improvement without the need for retraining. | [
"missing modality",
"test-time adaptation"
] | Accept (Poster) | https://openreview.net/pdf?id=1L52bHEL5d | https://openreview.net/forum?id=1L52bHEL5d | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xVqH142uXb",
"rnHnCn6uH3",
"pFMzm8zBZs",
"nDErpWYkpK",
"cv5bARULb9",
"aQyM10Ww5G",
"WGevhaOgup",
"TOIEacmOjU",
"PyNXQMnNro",
"PeFHya5yKy",
"ODNHDkAc8Q",
"O2sp78lSAW",
"JEOy8KW7p9",
"FhwhfoGs6v",
"CsCD3Llmhp",
"8lgtN8uxal",
"7xPtAgwjmj",
"7rhywWNyx8",
"40PtzHSI9T",
"1UqKNW8F7C",
"09iEQj4e3E"
],
"note_type": [
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730654493199,
1730864505480,
1732536950984,
1732223612976,
1732620035555,
1732780398383,
1730666852582,
1732224627229,
1732880088149,
1734756317097,
1732224189161,
1732480781987,
1733175417384,
1733175391576,
1732732129393,
1730706388171,
1737523749055,
1732224065731,
1732690561335,
1732224642411,
1732224955272
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_wPGf"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_6iPc"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_wPGf"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_CsYs"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_wPGf"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_M6ux"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Area_Chair_Khux"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_CsYs"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Reviewer_M6ux"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6184/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"In this work, the authors focus on an important task which is test time adaptation for egocentric video action recognition under missing modality. The authors validate existing work of TTA on this new task and propose a new method MiD1 to enhance the robustness of the learned features. The performance of the proposed method is evaluated on the EpicKitchen sound and video dataset\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Missing modality issue is important for test time adaptation of ego centric action recognition. This task will contribute to the community.\\n\\n2. Method section is clearly written and easy to follow.\\n\\n3. Compared with the baseline, the proposed approach show good performance on this new task.\", \"weaknesses\": \"1. Lack of the comparison with other approaches specifically targeted at missing modality issue.\\n\\na. Dai, Y., Chen, H., Du, J., Wang, R., Chen, S., Wang, H., & Lee, C. H. (2024). A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 27445-27455).\\n\\nb. Lee, H. C., Lin, C. Y., Hsu, P. C., & Hsu, W. H. (2019, May). Audio feature generation for missing modality problem in video action recognition. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 3956-3960). IEEE.\\n\\nc. Wang, H., Chen, Y., Ma, C., Avery, J., Hull, L., & Carneiro, G. (2023). Multi-modal learning with missing modality via shared-specific feature modelling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 15878-15887).\\n\\n\\n2. The authors are suggested to enlarge the benchmarks. I aggree that this task is an important task, however the experiments are limited in this paper which will be harmful to its soundness. The authors could enrich the benchmark using more existing TTA approaches, e.g., d,e,f, and g, and try to provide an analysis on the performance on different cluster of approaches. Missing modality works can also serve as good baselines to enrich the benchmark.\\n\\nd. Chen, D., Wang, D., Darrell, T., & Ebrahimi, S. (2022). Contrastive test-time adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 295-305).\\n\\ne. Wang, D., Shelhamer, E., Liu, S., Olshausen, B., & Darrell, T. (2020). Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726.\\n\\nf. Yuan, L., Xie, B., & Li, S. (2023). Robust test-time adaptation in dynamic scenarios. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 15922-15932).\\n\\ng. Niu, S., Wu, J., Zhang, Y., Chen, Y., Zheng, S., Zhao, P., & Tan, M. (2022, June). Efficient test-time model adaptation without forgetting. In International conference on machine learning (pp. 16888-16905). PMLR.\\n\\n3. No qualitative reustls. The authors are suggested to provide some qualitative results when comparing their approach with the baseline approach. Some failure case analysis will be helpful.\\n\\n4. The performance of the proposed approach is only verified on EPIC Kitchen, generaliyability to other dataset can be an issue.\\n\\n5. TSNE visualization on the latent space will be helpful to see how the proposed supervision help during the feature learning procedure. 
The authors could visualize the changes for different epoches compared with its baseline.\", \"questions\": \"1. Could the authors include comparisons with other approaches specifically addressing the missing modality issue, such as those proposed by Dai et al. (2024), Lee et al. (2019), and Wang et al. (2023)?\\n\\n2. Given the importance of this task, would the authors consider expanding the benchmark by including more test-time adaptation (TTA) approaches, such as Chen et al. (2022), Wang et al. (2020), Yuan et al. (2023), and Niu et al. (2022), and analyzing performance across different clusters of approaches? Including missing modality approaches as baselines may also strengthen the benchmark.\\n\\n3. Could the authors provide qualitative results comparing their approach to the baseline, along with a failure case analysis to offer insights into scenarios where the method may fall short?\\n\\n4. Has the generalizability of the proposed approach been tested on datasets beyond EPIC Kitchen? If not, would the authors consider verifying the performance on additional datasets?\\n\\n5. Could the authors use TSNE visualization on the latent space to illustrate how the proposed supervision affects feature learning? Specifically, visualizing changes over different epochs in comparison to the baseline might provide additional insights.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"To tackle the issue of modality missing in real-time tasks, this framework offers an online self-supervised learning method called MiDl. MiDl uses mutual information and KL divergence as loss functions to optimize the model in real time, enabling the baseline model to better handle inputs with missing modalities.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper redefines the modality missing problem as a test-time adaptation (TTA) issue, emphasizing the challenges of modality absence faced in online multimodal tasks. This is indeed an urgent problem that needs to be addressed for many online multimodal tasks.\\n\\nThe proposed MiDl method effectively enhances the baseline model's ability to handle missing modalities, serving as a solution for the modality missing problem in multimodal online tasks. This approach can act as a supplement when facing modality absence in such tasks. For instance, if modalities are functioning normally, this pipeline may not be used; however, when a modality is missing, the proposed solution can improve the baseline model's capability to handle the missing modalities. Additionally, normal inputs and prediction results can serve as supplementary information when modalities are insufficient.\\n\\nThe experiments presented in the paper are comprehensive, demonstrating that the \\nmethod is independent of modality selection, baseline models, and model frameworks, thereby proving the robustness of the proposed solution.\", \"weaknesses\": \"1. \\\"First, the prediction of should be invariant to the modality source . Ideally, f_{\\\\theta} should output the same prediction under both complete and incomplete modality, hence satisfying the following equality: (i)...,\\\" The underlying assumption of the approach is controversial. The task will degenerate into a modality distillation problem if this assumption holds,. Is there a more reasonable way to phrase this?\\n2. Implementing this method in real-world production could introduce significant computational overhead and latency. Normal models can be accelerated through techniques like compression and distillation, but this approach involves updating model weights, requiring the retention of the complete model, making it difficult to deploy directly in practice.\\n3. Could you include experiments demonstrating the approach's decision-making in more complex online scenarios? The experiments provided in the paper do not represent the best use case for this method; its most suitable application is in online scenarios, so experiments in these contexts would better support the results.\", \"questions\": \"1. If this approach faces extreme examples, such as a video showing a calm street while the audio is an explosion, will this model mislead the baseline model into the wrong direction?\\n2. You might consider adding extra blocks to the model, so that if updates are needed, only the added portions need to be updated. Alternatively, updating part of the model's structure could prevent the significant latency introduced by updating the entire system.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to the rebuttal\", \"comment\": \"Dear Authors,\\n\\nI am not satisfied with your answer regarding the generalizability of your approach beyond EPIC-Kitchen.\\n\\nThe two datasets you mentioned are all from EPIC-Kitchen but in different modalities.\\n\\nSome other datasets can be also leveraged to conduct the experiments and validate the generalizability of your proposed method, e.g., Ego4D dataset (or other dataset) [1].\\n\\n[1] Grauman, K., Westbury, A., Byrne, E., Chavis, Z., Furnari, A., Girdhar, R., ... & Malik, J. (2022). Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 18995-19012).\\n\\nI think the task proposed by the authors are interesting, but the experiments are only conducted on EPIC-Kitchen which may limits the contribution.\\n\\nThereby I will keep my score as 5 based on the current response.\"}",
"{\"title\": \"General reply\", \"comment\": \"We sincerely thank the reviewers for their insightful feedback and recognition of the strengths of our work. We are encouraged by the positive reception and appreciate the reviewers highlighting several key aspects of our contributions:\\n\\n\\n\\n\\n**Novelty of Problem Formulation**\\n\\n\\nWe are grateful that the reviewers acknowledged our redefinition of the missing modality problem as a test-time adaptation (TTA) challenge. This novel formulation **eliminates the need for retraining** and provides a practical solution for multimodal online tasks. Reviewer 6iPc emphasized the urgency of addressing this challenge in online settings, while Reviewer CsYs noted its potential to inspire further exploration in the research community.\\n\\n\\n**Effectiveness and Generalizability of MiDl**\\n\\n\\nWe appreciate the recognition of MiDl's adaptability and robust performance across different modalities, baseline models, and frameworks. Reviewer 6iPc commended MiDl as an effective solution for missing modalities, while Reviewer wPGf noted its strong performance compared to baselines. Reviewer CsYs highlighted the **comprehensive experimental evaluation**, demonstrating MiDl's applicability to various scenarios.\\n\\n\\n**Clarity and Motivated Presentation**\\n\\n\\nWe are pleased that the reviewers found the manuscript to be clear and well-organized. Reviewer CsYs appreciated the **strong motivation** behind our method and its intuitive design. Reviewer wPGf also noted that the method section is easy to follow, which aligns with our goal of presenting a practical and accessible solution. Additionally, Reviewer M6ux highlighted that Section 3 (\\\"Missing Modality as Test-Time Adaptation\\\") is well-written and easily comprehensible. \\n\\n\\n\\n\\n**Comprehensive Experiments and Insights**\\n\\n\\nThe **thoroughness of our experiments was highlighted by multiple reviewers**. Reviewer M6ux commended the robustness of our experimental setup, including repeated trials with standard deviation reporting, as well as the significance of findings such as the Ego4D warm-up results at a 100% missing rate. Reviewer 6iPc also pointed out that the experiments effectively demonstrate the robustness of the proposed method, as it is independent of modality selection, baseline models, and model frameworks. Reviewer CsYs also acknowledged our extensive analysis and benchmarking of prior TTA methods such as SHOT and TENT, which provide valuable insights into the formulated task.\\n\\n\\n\\n\\n**Relevance and Broader Impact**\\n\\n\\nWe are encouraged that the **reviewers recognize the broader impact of our work** on the research community. Reviewer wPGf highlighted the importance of the missing modality issue for ego-centric action recognition, while Reviewer CsYs noted that our method offers a foundation for subsequent discussions and developments in this area.\\n\\n\\n\\n\\nWe deeply appreciate the reviewers' constructive feedback and their acknowledgment of the strengths of our work. These insights will help us further refine our manuscript and reinforce its contribution to the field. Given the positive reception of the paper and its potential for future research, we are committed to releasing the code before the rebuttal period ends. We are actively working on it.\"}",
"{\"title\": \"Response to Rebuttal\", \"comment\": \"I thank the authors for their effort in answering my doubts and concerns. I would maintain my rating at the moment. I would like to see if the code can be released shortly to check for reproducibility.\"}",
"{\"title\": \"Response\", \"comment\": \"Dear authors,\\n\\n\\nThank you for your detailed response. My concern is well solved by the latest reponse from the authors and I will increase my rating to 6.\\n\\nBest,\\n\\nyour reviewer.\"}",
"{\"summary\": \"This paper presents a novel approach to handling missing modalities in multimodal learning using test-time adaptation. The method, MiDl, shows promising results across the EPIC kitchen and EPIC sounds datasets, and the method is motivated by the theoretical intuition of minimizing the mutual information between the predicted and available modality. The authors also provide some interesting analysis of the model through long-term adaptation, out-of-distribution warmup, and various ablation experiments.\\n\\nThis review follows the sections of the paper.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Introduction:\\n1. The second and third paragraphs effectively identify the gap in the literature and provide a robust overview of the proposed solution.\", \"related_works\": \"2. Related works is concise and relevant\", \"missing_modality_as_test_time_adaptation\": \"3. This section is well-written and easily comprehensible.\", \"experiments\": \"6. Additional details could enhance the reproducibility of this work. Was any hyperparameter tuning conducted for MiDl? Section B.1 mentions the recommended hyperparameters for the baseline but doesn't specify how they were determined for MiDl. Moreover, what were the proportions of the train/val/test split?\\n7. In section 5.3 LTA: You allow the model to retrain on some of the unlabeled training data. Why not gather $S_{in}$ from the validation set? In this setting, is MiDl trained on $D \\\\ S_{in}$? Or is $S_{in}$ still included in the labeled dataset before test time and then used without labels during test time?\\n8. Can this method be applied to instances where either modality is missing, e.g., P = {.3,.3,.3}? It would be great to see results for experiment with such a ratio. Currently, it may be the case that the model learns to leverage ONLY the modality that is consistently present in both the complete modality test case and the missing modality test case. In this scenario, would a given unimodal model for the always-present modality perform optimally? Table 1 could be improved by clarifying what is meant by \\\"Unimodal\\\" and why it is only present at the 50% missing rate. For Epic Sounds, is the unimodal the always-present modality (video)?\", \"analysis_on_midl\": \"9. Both architecture choices are transformer-based. It would have been more convincing to see a greater diversity of architectures (such as a convolution backbone). Instead of presenting different missing rates as columns in Table 3, it would have been preferable to see different architectures/methods as the columns with a fixed missing rate (perhaps 50%).\\n10. Given that the main motivation was to avoid retraining an existing method on a large dataset to perform missing modality adaptation, the results would have been more convincing if the authors had either used an existing model+dataset and just performed adaptation, as opposed to training from scratch an existing method. Alternatively, they could have tested with a very large dataset that was computationally expensive. The omnivore pretraining test is good. Did you train the model from scratch on your dataset or use an existing model and apply MiDl?\\n11. In Table 6, shouldn't 55.2 in the Dl column be bolded?\\n12. I thought the motivation was that retraining on the train set is computationally expensive, and TTA will prevent that? 
It's good that you acknowledge the computational requirements of MiDl, but then in the abstract, you shouldn't state: \\\"Current methods, while effective, often necessitate retraining the model... making them computationally intensive.\\\" Alternatively, compare your inference computation here with the amount of computation required to retrain training data (to get sufficient performance).\", \"weaknesses\": \"Introduction:\\n1. The motivation presented in the first paragraph is weak. For instance, in Line 40, could you provide an example or application where inference must occur on a redacted modality? I can think of the application of blurring faces in images or deleting private details in medical records, but it's unclear when only one modality would be completely removed for privacy reasons while the other remains intact for the same data instance. Additionally, the relevance of using cheaper modalities to missing modalities is not apparent (line 40). It would be particularly convincing if the motivation aligned with the tested dataset. For example, if using Epic Kitchens, perhaps a scenario involving a humanoid with a malfunctioning audio sensor, or smart glasses with an obscured camera due to steam or food spillage could be considered.\\n2. Contribution (3) appears to be describing MiDl, which is already covered in contribution (2). I would recommend reassessing what could constitute a third distinct contribution from your work.\\n3.Figure 1 requires improvement. The concept of a \\\"potential performance trajectory\\\" needs clarification - is this your hypothesis? This graphic would be more persuasive if it depicted your actual no-adaptation baseline and your TTA method. The purpose of the black line in the middle of the graph is unclear.\", \"proposed_solution\": \"4. The notation in eq (1) lacks precision. It is not evident that f(x;m) and m are random variables. The output of f is a distribution. Are you considering this as a random variable, with the value being the indices and the probability of those values the logits? Consider introducing a random variable Y ~ f(x;m). Also, consider using capital letter notation (e.g. \\\"M\\\") for the random variables. Furthermore, how can you evaluate the KL if x ~ S only has the modality m, not AV? Later in this section, it becomes apparent that you only update the model on complete instances. This limitation/assumption should be made clearer in the introduction or Takeaways subsection. This method would only be applicable for testing data that includes some multimodal instances.\\n5. At last line of page 4, is $x_t$ a single sample? Do you mean samples $x_0 \\\\dots x_t$?\", \"questions\": \"While the technical aspects and experimental results are generally strong, there are areas for improvement in the motivation, clarity of presentation, and some experimental details.\\n\\nI presented many questions and suggestions in the weaknesses suggestions. In particular, I would suggest the authors focus on the concerns about the motivation and the experiments aligning with that motivation. My comments regarding notation and small fixes are merely suggestions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Responses to Reviewer M6ux's Comments and Suggestions\", \"comment\": \"**Introduction**\\n\\n**Motivation and Missing Modality Examples** \\nWe thank the reviewer for their thoughtful feedback and detailed suggestions. The necessity of addressing this problem can indeed be observed in practical scenarios such as the large-scale egocentric data collection in the Ego4D dataset (Grauman et al., 2022). In this dataset, while RGB video is consistently available, audio is absent in approximately 30% of the data due to privacy regulations in specific locations. \\nAdditionally, the reviewer\\u2019s suggestion about device failure or sensor deactivation is a valid and practical scenario. We also appreciate the reviewer\\u2019s point about the relevance of using cheaper modalities. This topic has been explored in recent literature (Grauman et al., 2023), where researchers focus on scenarios where models can infer information using fewer, less costly modalities to reduce energy consumption.\\n\\n\\n---\\n\\n**Contributions and Figures**\\n\\n**Contributions and Figure 1 Improvements** \\nWe thank the reviewer for their suggestions regarding the figures and contributions. We have updated Figure 1 and the contributions statement accordingly in the manuscript. To clarify, the \\\"potential performance trajectory\\\" in the original figure was intended to represent the desired outcome of a successful adaptation strategy, serving as a conceptual illustration rather than being based on actual data. However, we agree that using our method's results instead makes the figure more impactful and better supports the message. We replace this plot with MiDl results on the Epic-Kitchens dataset.\\n\\n\\n---\\n\\n**Proposed Solution**\\n\\n**Notation and KL Divergence Clarifications** \\nWe thank the reviewer for raising these points.\\n\\n1. **Regarding Notation**: Indeed, $m$ is a discrete random variable sampled from $\\\\{A, V, AV\\\\}$, defined by probabilities $\\\\{p_A, p_V, p_{AV}\\\\}$, a property of the stream $\\\\mathcal{S}$. For instance, $p_{AV} = 1$ implies $\\\\mathcal{S}$ always reveals complete modalities. Sampling $x \\\\sim \\\\mathcal{S}$ equips $x$ with $m = M$. We are happy to revise the notation further if necessary. \\n\\n2. **Regarding KL Divergence**: When $x$ is modal-complete, we define the KL divergence expectation over $m$ with $p_A = p_V = p_{AV} = \\\\frac{1}{3}$.\\n\\n3. **Regarding Updates**: As outlined in the interaction (gray box) in Section 4 (lines 262-263), MiDl adapts the model on the modal-complete data points while predicting without adaptation on other samples. Thus, MiDl produces predictions for every single data point regardless of its modality. As noted in lines 265-265, our work focuses on multimodal settings, and we assume $p_{AV} \\\\neq 0$. We have modified the Takeaways subsection of Section 4 to more explicitly highlight this assumption.\\n\\n4. **Clarification on x_t**: We apologize for any confusion regarding this notation. As clarified in lines 179\\u2013181, x_t refers to a sample or batch presented to the model at time step t. This does not imply all samples accumulated up to step t (i.e., x_0, x_1, \\\\ldots, x_t\\u200b); rather, it strictly refers to the data arriving at time step t alone.\\n\\n---\\n\\n**Experiments**\\n\\n**Reproducibility Enhancements** \\nWe appreciate the reviewer\\u2019s request for additional details.\\n\\n1. 
**Hyperparameter Tuning**: The implementation details for MiDl, including the selected hyperparameters, are provided in Section B.1. These hyperparameters were determined through a grid search to identify the optimal settings for the task. We will ensure that this clarification is made explicit in the manuscript. Additionally, our code release will further facilitate the reproducibility of these results. \\n\\n2. **Dataset Splits**: We adhered to the official train/val/test splits provided for the Epic-Kitchens and Epic-Sounds datasets. The approximate ratios for these splits are 75% for training, 10% for validation, and 15% for testing. We will revise the manuscript to explicitly state these proportions to avoid any ambiguity.\\n\\n**Clarification on LTA Setup** \\n\\nWe thank the reviewer for this thoughtful observation. To clarify, as outlined in Section 5.3, we reserve a subset of the training data for the Long-Term Adaptation (LTA) stage. The model observes labeled data from S_{in} prior to test time (during training), but we do not use any labels\\u2014whether from S_{in} or elsewhere\\u2014during the adaptation phase. This design simulates a practical scenario where a portion of training data can be stored and utilized for adaptation at test time without relying on labels.\\nWe do not use validation or test data for LTA because our assumption is that data arrives as a stream at test time, requiring immediate predictions. While our current setup reflects this realistic streaming assumption, in practical scenarios, one could envision access to test data in advance, allowing for storage of unlabeled data for long-term adaptation. This flexibility could further enhance the applicability of MiDl in various deployment settings.\\n\\n---\"}",
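A minimal sketch of the uniform-modality KL/mutual-information term described in the reply above, written in Python for illustration only. It is not the paper's Eq. (1) or released implementation: the classifier interface `f(audio, video)` returning class logits and the convention of passing `None` for a missing modality are assumptions made for this example.

```python
import torch.nn.functional as F

def prediction_modality_mi(f, audio, video, eps=1e-8):
    # Illustrative estimate of I(Y; M) for a modal-complete batch, with the
    # modality indicator m uniform over {A, V, AV} (p_A = p_V = p_AV = 1/3).
    # `f(a, v)` -> logits and `None` for a missing modality are assumed here.
    probs = [
        F.softmax(f(audio, None), dim=-1),   # m = A
        F.softmax(f(None, video), dim=-1),   # m = V
        F.softmax(f(audio, video), dim=-1),  # m = AV
    ]
    marginal = sum(probs) / len(probs)       # mean prediction over modalities
    # I(Y; M) = E_m[ KL( p(y | x, m) || mean_m' p(y | x, m') ) ]
    kls = [(p * ((p + eps).log() - (marginal + eps).log())).sum(-1).mean()
           for p in probs]
    return sum(kls) / len(kls)
```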
"{\"title\": \"Updates on the code release\", \"comment\": \"We sincerely thank the reviewers for their patience regarding the code release. We are pleased to inform you that we have updated the manuscript to include an anonymous link to the code, which you can access [here](https://anonymous.4open.science/r/midl_tta-2E36/). The repository includes a detailed README file with all the necessary details for running the code. Upon acceptance, we are fully committed to making this repository publicly available.\"}",
"{\"metareview\": \"The rebuttal provided clarifications about the proposed method and its analysis that were useful for assessing the paper's contribution and responded adequately to most reviewer concerns. All reviewers recommend acceptance after discussion (with four marginally above the acceptance threshold), and the ACs concur. The final version should include all reviewer comments, suggestions, and additional clarifications from the rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"NA\"}",
"{\"title\": \"Responses to Reviewer CsYs's Comments and Suggestions\", \"comment\": \"**On multimodal pretraining requirements**\\n\\nThank you for raising this question. We assume that the model accepts multimodal inputs at test time and has been trained using multimodal data to ensure compatibility with the test-time scenario. If a unimodal model were used at test time, it would lack the capability to leverage the full set of modalities present in the multimodal inputs, thereby limiting its performance and effectiveness in such scenarios. \\nTo address the scenario you mentioned, where a unimodal model (e.g., a video model) is used for multimodal data, a finetuning stage would typically be required. For instance, the audio backbone could be initialized with the weights from video pretraining and then finetuned on audiovisual data. However, in our method, we do not perform any finetuning. Instead, we assume that the models have already been trained on multimodal data, which aligns with the scope and assumptions of our approach. \\n\\n---\\n\\n\\n**On mitigating performance drops at 100% missing modality** \\n\\nThank you for your insightful comments. Adapting a model to a fully unimodal stream (100% missing rate) at test time is indeed a challenging scenario, particularly without labeled data. In this extreme case, MiDl neither degrades nor improves baseline performance, maintaining the integrity of the original multimodal model. \\nWhile methods like SHOT provide slight improvements over the baseline under a 100% missing ratio, they exhibit significantly lower performance compared to MiDl when some multimodal samples are available. MiDl is designed with the assumption that some presence of complete modalities at test time is necessary for effective adaptation, which aligns with the typical expectations for multimodal models. \\nWe view it as a strength of MiDl that it avoids degrading the original model\\u2019s performance in this extreme case, rather than a limitation. Moreover, we highlight that MiDl demonstrates a significant performance boost during long-term adaptation, even if the test stream becomes unimodal over time (see Table 2 for detailed results). \\n\\n\\n---\\n\\n\\n**On expandability to more modalities** \\n\\nWe thank the reviewer for this insightful comment. MiDl does not impose any inherent limitations on the number of modalities; the scalability depends on the capabilities of the base model used. In Section 3, we formulated our approach m \\u2208 {A, V, AV} to align with our experiments on audiovisual egocentric datasets. However, the underlying problem and methodology can naturally extend to any number of modalities. \\nSimilarly, MiDl is designed to work seamlessly with an arbitrary number of modalities. The formulations in Equation 1 and Equation 2 can be easily generalized by replacing AV with combinations of additional multimodal inputs, enabling broader applicability beyond the audiovisual setup presented in this work. \\n\\n---\\n\\n\\n**On dropping the secondary modality** \\n\\nWe thank the reviewer for this comment. We report results for scenarios where the secondary modality is dropped in Section 6.2 and Table 4 of the main manuscript. Specifically, we present results for Epic Sounds when the video modality is dropped and for Epic Kitchens when the audio modality is dropped. These experiments demonstrate the robustness of our method across different modalities under varying missing probabilities. 
\\n\\n---\\n\\n\\n**On reproducibility of results** \\n\\nWe are working on a code release; please stay tuned for future replies. We are committed to submitting it before the discussion period ends.\"}",
"{\"title\": \"Qualitative Results Update\", \"comment\": \"Dear Reviewer wPGf,\\n\\nIn response to your request for qualitative results, we have added Figures 3 and 4 in Section D of the appendix (please refer to the updated version of the PDF). These figures compare our approach with the base model and include a failure case analysis. This addition provides valuable insights into the strengths of our method and identifies scenarios where it may face limitations, directly addressing your feedback.\\nWe appreciate your thoughtful suggestion, as incorporating these results has enhanced both the presentation and the overall impact of our paper\\u2019s findings.\"}",
"{\"comment\": \"Thank you for taking the time to thoughtfully review our rebuttal and for reconsidering your scores. We greatly appreciate your constructive feedback and recognition of our work's contributions. Your insights have been invaluable in improving the quality of our work.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to thoughtfully review our rebuttal and for reconsidering your scores. We greatly appreciate your constructive feedback and recognition of our work's contributions. Your insights have been invaluable in improving the quality of our work.\"}",
"{\"title\": \"Updates on Dataset Diversity\", \"comment\": \"Thank you for your thoughtful feedback and for engaging deeply with our work. We truly appreciate your efforts to ensure that the contributions of our approach are robust and well-supported, and we welcome the opportunity to further clarify our choices and provide additional evidence of MiDl's generalizability.\\nAs requested by the reviewer, we conducted experiments with MiDl on Ego4D dataset. Although Ego4D does not currently provide an official action recognition benchmark, we adopt the approach of Ramazanova et al. (2023) and use their Ego4D-AR dataset. In our evaluation, **MiDl demonstrates consistent improvements over baseline methods while dealing with missing audio in Ego4D-AR**. For instance, at a 75% missing rate, MiDl achieves 2% performance gain over the baseline achieving an accuracy of 23.4%, outperforming the baseline (21.4%), TENT (15.9%), and SHOT (22.1%). **These findings are detailed in Table 12 in the PDF**. Notably, as Ego4D inherently features instances of missing audio, we conducted evaluations at 50%, 75%, and 100% missing rates.\\n\\n\\n| 1-p_{AV} (%) | 50 | 75 | 100 |\\n|--------------|--------|-------|-------|\\n| BASELINE | 26.2% | 21.4%| 16.6%|\\n| TENT | 23.3% | 15.9%| 9.3% |\\n| SHOT | 26.6% | 22.1%| **18.3%**|\\n| MIDL (ours) | **27.1%** | **23.4%**| 16.6%|\\n\\n\\n---\\n\\n\\nWe would also like to emphasize that EPIC-Kitchens (Damen et al., 2020) and EPIC-Sounds (Huh et al., 2023) are two distinct datasets with no overlap in their set of classes or recognition tasks. While both datasets originate from the same underlying collection of long-form videos recorded in kitchen environments, **each dataset comprises a distinct set of trimmed clips**. Moreover, their **annotations and tasks are different**: EPIC-Kitchens focuses on action recognition, while EPIC-Sounds is designed for audio/sound classification. That said, we acknowledge that both datasets are centered on kitchen activities, and Ego4D (Grauman et al., 2022) encompasses a broader range of daily activities.\\n\\n\\nWe hope this additional experiments and clarification sufficiently address your concerns regarding generalizability. Thank you again for your constructive feedback, which has motivated us to strengthen our work further.\"}",
"{\"summary\": \"This paper tackles on missing modalities in egocentric videos without the need to retrain models by formulating this challenge as a test-time adaptation task. The authors proposed MiDl which minimizes the mutual information between prediction and the modality, with the incorporation of self-distillation to maintain performance when all modalities are available. The author benchmarked several methods under such problem formulation, demonstrating a descent performance when part of the modality are missing in two egocentric video datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, the paper is interesting and easy to follow.\", \"The formulation of the test-time adaptation for tackling missing modality without the need for model retraining is indeed novel and can be foreseen to be applied to various applications which contains multi-modal information.\", \"Although the method itself is not complex and consists of components already used for various tasks for multi-modal learning and egocentric video analysis, they are leveraged in MiDl with strong motivation which are intuitive and reasonable. The extended discussion also offers a deeper understanding over the formulation of MiDl. This method could prove to be a good starting point for subsequent discussions in the research community.\", \"The authors also provide a comprehensive analysis over the performance of MiDl in the formulated task, and also benchmarked previous methods such as SHOT, TENT under the same setting, which also provides a further insight into the challenges and possible methods to further tackle the task.\", \"In general, this paper is relatively well presented with a simple yet highly motivated method for an interesting formulation of a realistic challenge.\"], \"weaknesses\": \"There are a few minor concerns remaining in the paper, mainly on the clarity and possible extension in discussion of the proposed method. I would like the authors to consider the following concerns if possible:\\n1. On Page 4, Line 197-198, the author states that \\\"$f_\\\\theta$ should retain high performance in predicting data with complete modality, which is generally satisfied for $f_{\\\\theta_0}$\\\". Does this imply that the non-adapted pretrained model must be pretrained with all modalities available? What if the pretrained model is only trained with a single modality (e.g., only with visual information without the audio information which is rather common in video models)?\\n2. It is observed that there is a large drop when $1-p_{AV}=100$, where none of the data contain both modalities. What would be a possible approach to mitigate this drop in performance. It is observed that the drop for MiDl is significantly more severe than that of SHOT.\\n3. The current method only touches upon the case for two modalities (audio and video), is it expandable towards more modalities. Also, are there limitations for the possible types of modalities or it can be any modalities as long as they are obtained from the same set of data?\\n4. The experiments are performed for each dataset with a drop in the primary modality, what would be the result if the secondary modality is dropped with the same probability?\\n5. Lastly, the code is currently NOT available, which means that the reproducibility of the result is not verified.\", \"questions\": \"Please refer to the Weakenesses section. 
I highly encourage the authors to directly include the code for the verification of reproducibility.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Responses to Reviewer 6iPc\\u2019s Comments and Suggestions\", \"comment\": \"**On the assumption and phrasing regarding modality invariance**\\n\\n\\nWe thank the reviewer for raising this insightful point. We would like to clarify the distinction between our setup and the modality distillation problem, as they address fundamentally different challenges. In modality distillation, the primary goal is to transfer knowledge from a teacher model, which is usually trained with access to all modalities, to a smaller, typically unimodal student model. The focus is on training a student model to approximate the teacher\\u2019s performance despite having access to fewer (or weaker) modalities.\\n\\nIn contrast, our approach assumes that the model should inherently possess the capability to make consistent and accurate predictions, irrespective of the available modality or combination of modalities. For example, whether the model observes a silent video of a bird, hears the chirping sound alone, or has access to both, it should consistently recognize that it is observing a bird. This is not a process of distillation from one model to another but rather an effort to provide a single, modality-agnostic model that learns to generalize across different modality combinations.\\n\\n---\\n\\n**On computational overhead and latency in real-world applications**\\n\\n\\nWe thank the reviewer for highlighting this important consideration. We would like to emphasize that MiDl is agnostic to the base model, meaning it makes no assumptions about the model architecture or pre-training strategy. Consequently, our method can be applied even if the pre-trained model has already been compressed or distilled.\\n\\nWe acknowledge the computational cost associated with test-time updates, as noted in Section 6.5. This trade-off is a common consideration for any test-time adaptation methods and should be weighed against the performance benefits in real-world deployments.\\n\\n---\\n\\n**Extra experiments in online scenarios**\\n\\n\\nThank you for this valuable suggestion! We agree that addressing more complex online scenarios would be a compelling direction for further exploration. In fact, we followed the recent literature on online test-time adaptation (Alfarra et al., 2023) to define our stream setting, ensuring alignment with the current state-of-the-art. We are open to exploring additional scenarios that could showcase the applicability of our approach in even more complex online settings. Could you please elaborate on the specific scenarios or challenges you believe would better demonstrate the utility of our method? We would greatly appreciate your input.\\n\\n\\n---\\n\\n**On extreme examples with misaligned modalities**\\n\\n\\nWe appreciate the reviewer\\u2019s curiosity and for bringing up this interesting scenario. However, we would like to clarify that this situation is beyond the scope of our paper. The scenario you describe pertains to misalignment in multimodal data, which differs fundamentally from the missing modality problem that we address. In the missing modality problem, we are aware that a modality is absent, which may occur due to device malfunctions, sensor deactivation for privacy or efficiency, or similar reasons, as discussed in the introduction.\\n\\nIn the misalignment scenario you outlined, all modalities are still present but are not semantically aligned. 
If the misalignment is intentional, such as injecting incorrect inputs, it may fall under the category of adversarial attacks. This is outside the focus of our work, as we concentrate on scenarios where a modality is simply missing and not replaced with deliberately crafted or noisy inputs.\\n\\nEven in less extreme cases of natural misalignment\\u2014such as a TV playing unrelated sounds in the background while the video reflects what a person sees\\u2014this situation involves all modalities being available and presents a different setup to the one tackled by our work. Our focus remains on the challenges and solutions specific to missing modality scenarios.\\n\\n\\n---\\n\\n**On adding blocks or partial updates**\\n\\n\\nThank you for this interesting suggestion. Efficiency and latency are indeed critical considerations when applying test-time adaptation methods. As mentioned in Section B.1 (Lines 735\\u2013739 of previous document, or 751-755 on updated document) and Section B.3, our approach follows the prior line of work in test-time adaptation methods by only updating the normalization layers when applying MiDl, which reduces computational overhead.\\n\\nAdditionally, it is important to note that as we mentioned in Section 6.1, our approach is architecture-agnostic. This flexibility allows users to opt for a more lightweight architecture equipped with MiDl to tailor their specific application, thereby further addressing concerns around efficiency and latency.\"}",
"{\"comment\": \"The authors have done a fairly good job in their rebuttal, addressing my suggestions and making reasonable changes to the work. However, I remain unconvinced regarding the privacy-related motivation and the suggested computational benefits.\\n\\nWhile Ego4D redacted modalities due to privacy concerns, this was done for training purposes and model construction. It's unclear why a modality would be missing for privacy reasons when deploying the model in a real-world setting. This may be a misunderstanding or misphrasing in the introduction.\\n\\nRegarding computational efficiency, I think the authors misunderstood my original stated weakness. The work mentions that \\\"MiDl is 5x more expensive\\\" during testing. This raises the question: if the test set is more than one-fifth the size of the training set, wouldn't retraining a different model be faster than adapting with MiDl? I believe more analysis is needed to convincingly demonstrate that MiDl TTA is computationally superior to retraining.\\n\\nNevertheless, this is the first work I've encountered that explicitly explores missing modality TTA. Although there's room for expansion in terms of the number of modalities used, datasets employed, and baselines compared against, I believe the work provides a modestly sufficient contribution for ICLR. The exploration of various related and interesting aspects of this problem, such as different missing rates and LTA, is also noteworthy.\\n\\nConsequently, I am revising my score to marginally above acceptance.\"}",
"{\"title\": \"Responses to Reviewer M6ux's Comments and Suggestions\", \"comment\": \"**Exploring Missing Modality Ratios**\\nWe thank the reviewer for these insightful observations. This scenario is indeed valuable to explore. As noted in Appendix B.4 and Table 9, we report results for the mixed missing modality setup, where either modality may be absent. These results demonstrate that MiDl consistently outperforms the baseline under all tested conditions, including scenarios with mixed modality availability.\\n\\n\\n**Unimodal Clarifications** \\nWe appreciate the reviewer\\u2019s comments on clarifying the meaning of \\\"Unimodal.\\\" In our manuscript, unimodal refers to a model that uses only the always-present modality (e.g., video for Epic-Sounds and audio for Epic-Kitchens). We apologize for the confusion caused by the presentation in Table 1, where the unimodal result appears only at the 50% missing rate. To clarify, the unimodal results are constant across all missing rates, as the model relies solely on the non-missing modality, which remains unaffected by the missing rate of the other modality. To avoid redundancy, we initially reported the unimodal result once in the middle of the table, but we acknowledge that this presentation may have caused confusion. We have revised the manuscript to explicitly show the unimodal results across all missing rates for clarity.\\n\\n---\\n\\n**Architecture Diversity**\\n\\nWe thank the reviewer for this insightful comment.\\nWe would like to clarify that we do present results with different architectures and models. Specifically, we report results for self-attention-based models in Section 6.1 and Omnivore in Section 6.2. While we agree that further exploration with more diverse setups (e.g., convolutional backbones) could be valuable, our focus was on evaluating state-of-the-art and widely-used architectures, which are predominantly transformer-based.\\nWe appreciate the suggestion to reorganize Table 3 to present results with different architectures under a fixed missing rate. While our current presentation emphasizes performance across varying missing rates, we recognize that including architecture-level comparisons could provide complementary insights. We are committed to releasing the code, which we hope will enable further exploration of this problem from an architectural perspective.\\n\\n---\\n\\n**Computational Efficiency**\\n\\n**TTA vs. Retraining** \\nWe apologize for any confusion. As mentioned in Section 4 (Lines 270\\u2013275), we formulate the missing modality challenge within the test-time adaptation scenario. In this framework, we make no assumptions about the training process. Instead of retraining the network, we adapt it at test time by updating only a small subset of parameters.\\n\\n\\n**Omnivore experiment**\\nOur approach is designed to work with existing pretrained models, as demonstrated in our experiments, including the Omnivore pretraining test. This emphasizes the practicality of MiDl, as it eliminates the need for retraining on large datasets, aligning with our primary motivation.\\n\\n---\\n\\n**Minor Points**\\n\\n**Bolded Numbers Table 6** \\nWe thank the reviewer for catching this oversight. The correct value (55.2 in the \\\"Dl\\\" column) has been bolded in the updated manuscript.\"}",
"{\"title\": \"Responses to Reviewer wPGf's Comments and Suggestions\", \"comment\": \"**On comparison with related works addressing the missing modality problem**\\n\\nWe thank the reviewer for pointing out these related works. We would like to clarify that the proposed methods primarily address the missing modality problem during training. For example, Dai et al. investigate a strategy of randomly dropping video frames during training to improve the robustness of a multimodal system. Similarly, Lee et al. propose a method to train a network capable of generating audio features to handle missing modalities. Wang et al. focus on a multimodal learning approach that models shared and specific features for classification and segmentation tasks. \\nIn contrast, our work formulates the missing modality problem as a test-time adaptation challenge, a novel perspective that assumes no access to the training process or labels and instead addresses the problem entirely at test time. This distinction fundamentally differentiates our approach from the works cited, as our focus is on adapting trained models dynamically to optimize performance in the face of missing modalities. We have added these references in the main manuscript.\\n\\n\\nAs part of this framing, we compare MiDl against existing test-time adaptation methods, which are more aligned with the assumptions and constraints of our setup. Nonetheless, we appreciate the reviewer\\u2019s suggestion and will ensure these works are acknowledged in the related work section, highlighting the distinctions between training-time and test-time approaches to the missing modality problem. \\n\\n---\\n\\n\\n**On enriching the benchmark with additional TTA methods** \\n\\n\\nWe thank the reviewer for their valuable suggestions regarding enriching the benchmark with additional TTA methods. Our work already compares MiDl to several commonly used TTA methods to validate its effectiveness, including Tent (Wang et al.) and ETA (Niu et al.), which are explicitly mentioned in the manuscript. Results for these methods are presented in Tables 1, 3, and 7, showcasing their performance under different scenarios and comparing them to MiDl. \\n\\n\\nThe primary goal of our work is to redefine the missing modality problem as a test-time adaptation challenge, introducing a novel approach where pretrained models are adapted at test time to optimize performance in the face of missing modalities. We conduct extensive experiments, including ablations across various scenarios such as different backbones, pretraining strategies, and modality setups, to demonstrate MiDl\\u2019s effectiveness. \\n\\n\\nWhile we appreciate the suggestion to include additional methods like Contrastive TTA (Chen et al.) and Robust TTA in dynamic scenarios (Yuan et al.), we emphasize that our current comparisons and analyses already provide a comprehensive evaluation of MiDl\\u2019s performance. Future work could further expand on these comparisons to include additional methods for broader validation. \\n\\n\\n---\\n\\n\\n**On providing qualitative results and failure case analysis** \\n\\nThank you very much for this valuable suggestion. We are currently preparing qualitative examples comparing our approach with the baseline. These examples, along with an analysis of failure cases, will be included in the supplementary material in the revised submission. \\n\\n\\n---\\n\\n\\n**On generalizability beyond Epic-Kitchens** \\n\\nThank you for pointing this out. 
As mentioned in Section 5.1, we validate our approach on two distinct datasets: Epic-Sounds (Huh, et. al., 2023) and Epic-Kitchens (Damen, et. al, 2020). To align with the experimental setup of prior work, we assume different missing modalities for each dataset, with video missing in Epic-Kitchens and audio missing in Epic-Sounds. This demonstrates the adaptability of our method to varying modality configurations. \\n\\n\\n---\\n\\n\\n**On TSNE visualization of latent space** \\n\\nWe thank the reviewer for this insightful suggestion and their interest in understanding the effects of our method. While TSNE visualization is commonly used to illustrate feature learning and the clustering behavior of learned representations, we would like to emphasize that MiDl is not a feature learning approach in the traditional sense. Instead, it focuses exclusively on adapting pretrained models at test time by updating only the parameters of the normalization layers to handle missing modalities dynamically.\\n\\nThis design choice means that MiDl does not aim to significantly alter the learned feature space but rather adjusts the model's predictions to maintain robustness under test-time conditions. Consequently, the use of TSNE to visualize changes across epochs may not be directly relevant to evaluating MiDl\\u2019s effectiveness. \\n\\nIf the reviewer has specific aspects of the latent space or adaptation process they would like to see explored, we would be happy to incorporate such analyses to further enhance the interpretability of our method.\"}"
]
} |
1KvYxcAihR | TMGBench: A Systematic Game Benchmark for Evaluating Strategic Reasoning Abilities of LLMs | [
"Haochuan Wang",
"Xiachong Feng",
"Lei Li",
"Zhanyue Qin",
"Dianbo Sui",
"Lingpeng Kong"
] | The rapid advancement of large language models (LLMs) has accelerated their application in reasoning, with strategic reasoning drawing increasing attention.
To evaluate the strategic reasoning capabilities of LLMs, game theory, with its concise structure, has become the preferred approach for many researchers.
However, current research typically focuses on a limited selection of games, resulting in low coverage of game types.
Additionally, classic game scenarios carry risks of data leakage, and the benchmarks used often lack extensibility, rendering them inadequate for evaluating state-of-the-art models.
To address these challenges, we propose TMGBench, a benchmark characterized by comprehensive game type coverage, novel and diverse scenarios, and flexible game organization.
Specifically, we incorporate all 144 game types summarized by the Robinson-Goforth topology of 2×2 games, which are constructed as classic games in our benchmark.
Furthermore, we employ synthetic data generation techniques to create diverse, higher-quality game scenarios through topic guidance and human inspection for each classic game, which we refer to as story-based games.
Lastly, to provide a sustainable evaluation framework adaptable to increasingly powerful LLMs, we treat the aforementioned games as atomic units and organize them into more complex forms through sequential, parallel, and nested structures.
We conducted a comprehensive evaluation of mainstream LLMs, covering tests on rational reasoning, reasoning robustness, Theory-of-Mind capabilities, and reasoning in complex game forms.
The results revealed that
LLMs still have flaws in the accuracy and consistency of strategic reasoning processes, and their levels of mastery over Theory-of-Mind also vary.
Additionally, o1-mini, the latest reasoning model from OpenAI, was also evaluated across the sequential, parallel, and nested game structures and reached accuracy rates of 66.6\%, 60.0\%, and 70.0\%, respectively, highlighting the challenges posed by TMGBench. | [
"Large Language Models; Benchmark; Strategic Reasoning; Game Theory; Theory of Mind"
] | Reject | https://openreview.net/pdf?id=1KvYxcAihR | https://openreview.net/forum?id=1KvYxcAihR | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xW52WpE9DP",
"xAIUz6qTFe",
"wgNAcNlkBT",
"uJwoQNem0A",
"sYjoFpyAve",
"qD5ET6oNzL",
"p8xlT6bhA4",
"kXppcm1aBT",
"jlvUo53uRr",
"eKVs8J4779",
"d7fCCgkshv",
"cyofLfyoiD",
"av6CAccAfH",
"aVPa8da2iK",
"YZBaSolgZV",
"XPjGhleP85",
"WzOWFbet32",
"VjKE4yyOFF",
"QqGhdpS24v",
"QBj1997X4l",
"OufN9ya4Hx",
"Loj2uDQG97",
"LlpZfKoQC6",
"G1bSyERwqb",
"EDXmhYTVrY",
"DKD59a44tP",
"DHVoEQWs6J",
"AblEnl5VKG",
"4sMTFzrA0v",
"3s1H5MDrns",
"2bU7Bu54Es"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732203824551,
1732620004088,
1731833108088,
1733112433942,
1731988053020,
1733198070552,
1731830793195,
1737523803264,
1732840329841,
1732204823662,
1732673096592,
1731830983006,
1733219564579,
1731832987490,
1730688943215,
1730706672536,
1732761270469,
1730727549674,
1732761176180,
1733112665647,
1733112935634,
1729394987915,
1732580013422,
1732885988516,
1734933313163,
1733112279224,
1733198109946,
1733220748330,
1733214494639,
1733198141433,
1733220789503
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_AuNq"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_XhyT"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_XhyT"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_SyXy"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_cWxi"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_AuNq"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_XhyT"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_cWxi"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Area_Chair_o8cs"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Reviewer_cWxi"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6932/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for your careful review! (1/2)\", \"comment\": \"First, we thank you for giving kind advice and insightful opinions, and we will address your questions one by one.\", \"of_weakness_1\": \"**Question:** Lack of a clear definition for strategic reasoning.\\n\\n**Response:** We fully understand your concern about the lack of a clear definition of strategic reasoning and would like to provide further clarification. In the introduction of our paper, we referenced a survey paper titled \\\"A Survey of Strategic Reasoning with Large Language Models\\\", which defines strategic reasoning and discusses the key characteristics of tasks that require it. We used the concept derived from that, and we have already added more detailed explanations in the appendix to make it clearer for readers. Please refer to Appendix A of the latest revised paper. Here is a short explanation:\\n1. *Definition of Strategic Reasoning*: Strategic reasoning can be defined as the ability to make decisions based not only on one\\u2019s own actions but also on predicting and responding to the actions of others, especially in environments where outcomes depend on the interdependence of agents\\u2019 decisions. This definition distinguishes strategic reasoning from other forms of reasoning, such as common-sense reasoning or logical reasoning.\\n\\n2. *Characteristics of Strategic Reasoning*: The core feature of strategic reasoning is its reliance on anticipating and responding to the behavior of other participants.\\n\\n3. *Necessity and Applications of Strategic Reasoning*: Strategic reasoning is vital in various fields, including decision-making intelligence and social intelligence. For example, strategic reasoning is required in games like poker and chess, where players must predict and counter their opponents\\u2019 moves.\\n\\n---\\n\\n**Question:** Differences Between Our Benchmark and Existing Ones.\\n\\n**Response:**\\n\\n1. *Game Coverage*: TMGBench focuses on a specific but symmetric subset of 2x2 matrix games. Rather than incorporating a wide range of game types, we diversify the configurations and descriptions within this subset. While these games may share a common form representation, slight changes in numerical parameters or contextual framing can result in significantly different challenges for LLMs. Our experiments demonstrate that LLMs often become confused and perform poorly when only a few numbers are altered, highlighting their generalization issues in strategic reasoning.\\n\\n2. *Data Volume*: Each game in TMGBench contains several data points, ensuring a more comprehensive evaluation. This diversity in examples helps create a more robust and stable evaluation.\\n\\n3. *Scenario Diversity*: TMGBench introduces a wide range of scenarios within its game structures, offering diverse contexts in both numeric setups and story-based reframing. For example, existing benchmarks often focus solely on abstract payoff matrices, whereas TMGBench incorporates narrative elements to test how well LLMs adapt to different descriptions of the same strategic challenges. This diversity reflects real-world complexities, where decisions are rarely presented in isolated, numeric forms but instead in nuanced, contextual settings.\\n\\n4. *Game Extensibility*: TMGBench enables researchers to expand the benchmark by introducing additional forms of game compositions or more complex scenarios. For example, nested games can simulate hierarchical decision-making scenarios, like auctions or negotiations. 
This extensibility allows TMGBench to adapt to future developments in strategic reasoning research, offering long-term utility for evaluating increasingly sophisticated LLMs.\\n\\n---\\n\\n**Question:** Differences Between Strategic Reasoning and Other Types of Reasoning.\\n\\n**Response:**\\nStrategic reasoning differs fundamentally from other types of reasoning, such as common-sense reasoning, in that it involves considering the actions of other participants and predicting their behavior. For example, in common-sense reasoning, the focus is on making inferences based on factual knowledge, while in strategic reasoning, the focus is on understanding and anticipating the intentions and actions of other participants.\"}",
"{\"comment\": [\"Thanks very much for the rebuttal, which alleviates some of my concerns. Yet, I still find the contribution of the paper not strong enough for the following two reasons. Hence, I will maintain the initial rating.\", \"In the rebuttal, the authors said \\\"Strategic reasoning differs fundamentally from other types of reasoning, such as common-sense reasoning, in that it involves considering the actions of other participants and predicting their behavior.\\\" The authors mentioned poker and chess as examples. Yet, anticipating other players actions is not a necessity for playing well in poker. In fact, existing powerful AIs in poker don't predict other players actions, such as DeepStack, Libratus, Pluribus. For chess, it is a perfect information game, and there is no need to predict other players actions. To summarise, I am not fully convinced of the significance of the new benchmark presented in this paper.\", \"The experimental analysis in the rebuttal about how different LLM characteristics (e.g., model size, architecture, training data, or objectives) correlate with performance on TMGBENCH makes a good starting point but still looks incomplete. I would expect a more thorough investigation and ablations.\"]}",
"{\"title\": \"Thanks for your careful review! (2/2)\", \"comment\": \"**Significance:**\\n\\nOur contributions are summarized as follows, and we look forward to receiving your recognition.\\n\\n- **Granular Evaluation:** Our benchmark provides a finer-grained evaluation of current LLMs\\u2019 performance on strategic reasoning tasks. We specifically highlight the \\u201casymmetric pattern\\u201d observed in pairs of symmetric games, a phenomenon that reveals gaps in LLMs\\u2019 reasoning abilities and demonstrates the difficulty LLMs face in applying general strategies across different scenarios.\\n\\n- **Complex Game Compositions:** We also introduce the concept of more complex real-world games that can be constructed by combining atomic games in various forms. We demonstrate how these combinations create more challenging scenarios for LLMs and evaluate the performance of state-of-the-art LLMs in these more complex environments. Our work proposes several ways to combine simpler games into more complex scenarios, which opens up avenues for testing LLMs on a wider range of strategic reasoning tasks.\\n\\nMoreover, we would like to reveal TMGBench\\u2019s potential value for future LLM design. Our benchmark identifies key areas where LLMs need improvement:\\n\\n- **Long-Context Reasoning:** Our experiments identify mistakes in the internal process of LLMs\\u2019 strategic reasoning, which calls for better long-context reasoning ability.\\n- **Theory of Mind:** Our results highlight the need for more robust theory of mind capabilities, as LLMs still exhibit drawbacks when applying ToM (e.g., inconsistency, asymmetric patterns).\\n- **Understanding Multi-Participant Social Scenarios:** Through our data generation process, we observe that LLMs sometimes struggle to accurately understand social scenarios involving both conflict and cooperation.\\n\\nOverall, we see TMGBench as both a diagnostic tool for evaluating LLMs\\u2019 strategic reasoning capabilities and a guide for enhancing future LLMs with more complex and robust reasoning abilities.\\n\\nWe look forward to engaging in further discussions with you and receiving your additional guidance and feedback. Thank you!\"}",
"{\"title\": \"Thank you for your review and feedback\", \"comment\": \"Dear reviewer cWxi,\\n\\nThank you for your review and constructive feedback! We hope that our responses and revisions have addressed the concerns you raised. Upon your advice, we have already made our latest revision error-free and include more figures in the appendix sections to make the paper easier to follow. Please feel free to share any additional comments or suggestions. We greatly appreciate your thorough review and continued support.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"comment\": \"Thank you for providing detailed responses to my review and offering the new analysis on the issue of data leakage. The details help clarify some points that were not well expressed in the first version of the paper.\\n\\nThese replies have addressed aspects that were unclear in the presentation but have not substantially shifted my confidence in the results being presented due to the absence of statistical tests and apparent data leakage concerns revealing that the benchmark may be fundamentally subject to contamination.\\n\\n>In future revisions and subsequent work, we plan to include such statistical analyses.\\n\\nI look forward to seeing the results!\\n\\n>Regarding Figure 6, we will explore alternative visualizations that better highlight the differences between the classic and story-based settings, as suggested.\\n\\nI look forward to seeing this as well.\\n\\n> From the table, we observe that for advanced models such as gpt-4o, claude-3-5-sonnet, and Qwen2-72B, performance on the famous set of games does not consistently surpass (and in some cases is lower than) performance across all games. Conversely, for models like Llama-3.1-70B and gpt-4o-mini, the famous game set appears to be relatively easier. This is a fascinating finding and may indicate potential training data leakage for the more well-known games.\\n\\nThis is certainly interesting. To me this makes it clear that the benchmark needs some other means of organization, as the groups being aggregated may be mixing categories (like the famous vs not distinction here). \\n\\nIs there any way to show that the particular groups of games used are \\\"good\\\" groupings? This may be too vague to really answer.\"}",
"{\"title\": \"Look forward to your new feedback\", \"comment\": \"Dear reviewer AuNq,\\n\\nWe are very concerned whether our response has addressed your concerns and look forward to your new feedback.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"title\": \"Thanks for your careful review!\", \"comment\": \"We thank you for giving a high degree of recognition to our work and for effectively summarizing our core contributions.\\n\\nOne regret of this work is that we were unable to provide a more detailed background in the main text, due to the page limit, to help readers unfamiliar with game theory quickly grasp the designs of our TMGBench. We will revise the paper to include more introductory parts in the appendix to clarify some concepts.\", \"we_address_your_concern_as_follows\": \"**Question**: How to compute the standard answers to complex form games?\\n\\n**Response**: We present three kinds of complex forms in our work: sequential, parallel, and nested. In the sequential and parallel forms, the atomic games are independent of each other, so we directly compute the standard answer for each atomic game using the conclusions from the Robinson-Goforth topology. However, in the nested form (we explore the 2-folded nested form in our work), we compute the conditional Nash equilibrium using the functions below (This means that all answers can be automatically computed based on the rules, ensuring their strict correctness):\\n```\\ndef get_Nash_equilibrium(pA, pB, ban=None, banp=None):\\n # pA: player A's payoff matrix\\n # pB: player B's payoff matrix\\n # ban: the restricted situation\\n # banp: the restricted player\\n Nash_equilibrium_choice, Nash_equilibrium_result = [], []\\n for row in range(2):\\n for column in range(2):\\n alter_row, alter_column = 1 - row, 1 - column\\n if ban is not None and (row + 1, column + 1) == ban: continue\\n if (banp == \\\"A\\\" and (alter_row + 1, column + 1) == ban or pA[row][column] >= pA[alter_row][column]) \\\\\\n and (banp == \\\"B\\\" and (row + 1, alter_column + 1) == ban or pB[row][column] >= pB[row][alter_column]):\\n Nash_equilibrium_choice.append((f\\\"A{row + 1}\\\", f\\\"B{column + 1}\\\"))\\n Nash_equilibrium_result.append((pA[row][column], pB[row][column]))\\n return Nash_equilibrium_choice, Nash_equilibrium_result\\n\\u00a0\\ndef calc_conditional_NEs(task_id, ban, banp):\\n info = json.load(open(f\\\"dataset/families/{task_id}.json\\\"))\\n pA, pB = info[\\\"row payoffs\\\"], info[\\\"column payoffs\\\"]\\n return get_Nash_equilibrium(pA, pB, ban, banp)[0]\\n\\u00a0\\nactual_optimal_situation_pre_task = calc_conditional_NEs(pre_task_id, restricted_situation, restricted_player)\\n```\\n\\nWe hope these clarifications address your questions and look forward to further discussions and receiving your valuable guidance and feedback. Thank you!\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you for the additional efforts! It looks like the main concerns I have require more time to be developed, so I will maintain my current score.\\n\\n\\n\\n>Conversely, for models like Llama-3.1-70B and gpt-4o-mini, the famous game set appears to be relatively easier.\\n\\nI think there may be some confusion around the data contamination. I'm not sure how the PPL scores above would address the problem of data leakage on those games. I may be misunderstanding the details of your analysis above and how that connects to the observations about the famous games being easier. Does the new PPL data indicate that the famous games have equal perplexity scores as the non-famous? Or would the famous games perhaps be systemically easier in general?\"}",
"{\"title\": \"Thanks for your careful review! (2/2)\", \"comment\": \"Of Weakness 2:\\n\\nWe appreciate your suggestion for a more systematic analysis of how different LLM characteristics influence performance on TMGBench. We agree that such an analysis is crucial for understanding the factors that contribute to strategic reasoning abilities in LLMs. To address this concern, we performed additional experiments. Below are the results:\\n\\n| Model | PAR(\\u2191) | ID(\\u2193) | BD(\\u2193) | Difference |\\n| ---------------- | ------------------ | ------------------ | ----------------- | --------------------------- |\\n| Qwen2-7B | 21.30 | 22.72 | 39.71 | (baseline) |\\n| Qwen2-Math-7B | **63.17 (+41.87)** | **7.87 (-14.85)** | **14.47(-25.24)** | tuning dataset |\\n| Qwen2-72B | 46.21 | 19.94 | 29.29 | model size |\\n| Qwen2-Math-72B | **57.81 (+36.51)** | **11.07 (-11.65)** | **19.64(-20.07)** | tuning dataset & model size |\\n| Qwen2.5-7B | 48.26 | 11.51 | 18.43 | generation |\\n| Qwen2.5-Coder-7B | 44.62 | 13.27 | 26.48 | tuning dataset |\\n| WizardLM-7B | 21.24 | 28.84 | 57.52 | architecture |\\n\\nFrom the table, we observe that\\n1. *Effect of Dataset Tuning*: Math-tuned datasets significantly enhance performance in strategic reasoning tasks, while code-tuned datasets show limited improvements.\\n2. *Effect of Model Size*: Larger models (e.g., 72B) generally perform better than smaller ones (e.g., 7B). However, the marginal benefit of size increases appears to be smaller compared to the benefit from dataset tuning.\\n3. *Effect of Training Process and Objectives*: Models trained with different objectives (e.g., Qwen2-7B vs. WizardLM-7B) exhibit notable performance differences, highlighting the impact of pretraining strategies.\\n\\nWe hope these additional experiments provide insights for addressing your concern, and we plan to conduct more comprehensive experiments in our future work.\\n\\nWe look forward to engaging in further discussions with you and receiving your additional guidance and feedback. Thank you!\"}",
"{\"title\": \"Thanks for your follow-up comment!\", \"comment\": \"Thank you for the follow-up comment! We address the following concerns:\\n\\n**Concern 1: Absence of Statistical Analysis**\\n\\nWe sincerely appreciate the reviewer\\u2019s concern regarding the absence of statistical tests in our work. We acknowledge the importance of incorporating rigorous statistical analyses to enhance the robustness of our findings. While a comprehensive statistical evaluation of all comparisons is beyond the scope of this paper, we are willing to include detailed statistical testing in future work. To address your suggestion, we have conducted an example statistical analysis here, focusing on model comparisons.\\n\\nAs noted in line 348, *gpt-4o, gpt-4o-mini, and claude-3-5-sonnet are more capable compared to other models*. To further substantiate this claim, we performed a **Friedman rank test** to compare the performance of stronger models against weaker models:\\n\\n**Hypothesis H_0:** Model A and Model B have no significant performance difference over 144 games with CoT prompting.\\n**Hypothesis H_1:** Model A and Model B have a significant performance difference over 144 games with CoT prompting.\", \"the_results_of_the_test_are_presented_in_the_table_below\": \"| Model A | Model B | X\\u00b2 | Accept H\\u2080 (F \\u2248 3.06) |\\n| :---------- | :---------------- | :---- | :------------------- |\\n| gpt-4o | claude-3-5-sonnet | 0.44 | Yes |\\n| gpt-4o | gpt-4o-mini | 11.11 | No |\\n| gpt-4o | gpt-3.5-turbo | 75.11 | No |\\n| gpt-4o | Qwen2-72B | 29.34 | No |\\n| gpt-4o-mini | gpt-3.5-turbo | 43.34 | No |\\n\\nFrom the results, we observe that the null hypothesis (H\\u2080) is accepted for the pair (gpt-4o, claude-3-5-sonnet), indicating no significant performance difference between these two models. However, for other model pairs, such as (gpt-4o-mini, gpt-3.5-turbo), the null hypothesis is rejected, suggesting a significant performance difference.\\n\\nAlthough we cannot perform an exhaustive statistical analysis for all comparisons in this paper, we greatly value your feedback on the need for rigorous testing. We will incorporate more comprehensive statistical evaluations in our future work to further enhance the robustness and credibility of our findings.\\n\\n**Concern 2: Potential Data Contamination**\\n\\nThank you for your insightful comments on potential data contamination. In order to deeper resolve your concern on data contamination, we do additional analysis on our dataset:\\n\\n1. **Source Analysis:**\\n\\tOur dataset is **synthetic** and **template-based**, which significantly reduces the likelihood of explicit contamination. Since the data is generated using predefined templates and rules, there is a very low probability that any real-world data or previously encountered examples could seep into the dataset. This template-based approach helps maintain consistency across the examples, ensuring that the LLMs are evaluated purely on their strategic reasoning capabilities rather than being influenced by previously seen examples.\\n\\n2. **Perplexity (PPL) Analysis:**\\n\\tWe performed a **perplexity analysis** on the dataset, testing it with those open-source models. 
Our findings show that the PPL values are within reasonable ranges, indicating that the dataset does not exhibit typical signs of contamination:\\n\\n\\t- For **shorter classic data points**, the PPL values range from **8 to 10**:\\n\\n\\t| Model | Avg | Std |\\n\\t| ------------- | ------ | ------ |\\n\\t| Llama-3.1-8B | 8.9532 | 0.1733 |\\n\\t| Llama-3.1-70B | 8.0626 | 0.1552 |\\n\\t| Qwen2-72B | 9.0817 | 0.0913 |\\n\\n\\t- For **longer story-based data points**, the PPL values range from **3 to 6**:\\n\\n\\t| Model | Avg | Std |\\n\\t| ------------- | ------ | ------ |\\n\\t| Llama-3.1-8B | 5.0239 | 0.2308 |\\n\\t| Llama-3.1-70B | 4.2513 | 0.1921 |\\n\\t| Qwen2-72B | 3.8923 | 0.1389 |\\n\\n\\tThese PPL values suggest that the data is not overly predictable, indicating that the dataset is not contaminated. The variation in perplexity across different models demonstrates that there is no significant bias or discrepancy in predicting any of the data points. This further suggests that there are no obvious signs of data leakage. However, while we have not observed any direct evidence of leakage, we acknowledge that we cannot fully rule out the possibility of subtle contamination.\\n\\nHowever, we might be not able to address the concern of *alternative visualization of Figure 6* right now, sorry for that. (If you have some ideas on that, we will be very happy to discuss about it.)\\n\\nAgain, we still look forward to engaging in more discussions with you and receiving your additional guidance and feedback. Thank you!\"}",
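For context on the perplexity (PPL) numbers reported above: the comment does not spell out how they were computed, but per-datapoint perplexity under a causal language model is conventionally obtained as the exponential of the mean token-level cross-entropy. The sketch below is one such implementation using the Hugging Face `transformers` API; the model name and the placeholder data point are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of per-datapoint perplexity with an open-weight causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(text: str, model, tokenizer) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input ids as labels makes the model return the mean
        # token-level cross-entropy; its exponential is the perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

name = "meta-llama/Llama-3.1-8B"  # illustrative choice of evaluated model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

data_points = ["<classic or story-based game description>"]  # placeholder
ppls = [perplexity(dp, model, tokenizer) for dp in data_points]
print(sum(ppls) / len(ppls))  # average PPL, as in the tables above
```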
"{\"title\": \"Thanks for your careful review!\", \"comment\": \"We sincerely thank you for your detailed comments and constructive feedback. Below, we address the specific concerns you raised:\\n\\n**Question**: The sequential/parallel form games seem not to hold the task of testing for strategic reasoning.\\n\\n**Response**: We sincerely apologize for not clearly articulating the significance and importance of complex games in our benchmark. It is worth further emphasizing that our motivation for designing complex form games is to highlight that the atomic games in TMGBench resemble the \\u201cprimary components\\u201d of complex real-world social scenarios.\", \"specifically\": \"- For sequential games, real-life scenarios often require making decisions one after another to solve problems.\\n- For parallel games, such as in diplomatic contexts, governments often need to simultaneously make decisions in multiple domains, including technology, military, politics, and culture.\\n- For nested games, as seen in scenarios like auctions, the decisions made in earlier auctions often influence subsequent ones.\\n\\nThus, the complex game forms we designed can effectively represent various complex strategic scenarios in real life, enabling a more in-depth evaluation of large language models\\u2019 strategic reasoning capabilities.\\n\\n---\\n\\n**Question**: Potential ambiguous prompt expression.\\n\\n**Response**: We understand your concerns. Prior to our formal experiments, we conducted extensive prompt engineering to ensure that the prompts we currently use can reliably test the models. Additionally, the experimental results in our formal tests have validated that the prompts effectively and consistently evaluate the models.\\n\\n---\\n\\n**Question**: Proofreading errors.\\n\\n**Response**: We are grateful that you meticulously pointed out parts of the text that may not be easy for readers to understand. We have revised and improved these sections in the paper accordingly based on your guidance (respectively on line 124, line 224, line 235, line 364, line 317, line 476).\\n\\n---\\n\\n**Question**: The potential cause of Llama70B performing worse on DA than 8B.\\n\\n**Response**: We acknowledge your observation regarding the underperformance of the DA prompt (Llama70B performing worse than Llama8B). Here is our interpretation of this finding:\\n- Positional Bias: One possible explanation is positional bias. In some cases, larger models may exhibit stronger biases towards certain choices rather than random ones, leading to suboptimal performance. We will analyze this phenomenon further in the data.\\n- Emergent Abilities: The poor performance of both Llama70B and Llama8B under the DA prompt suggests that these models do not exhibit emergent abilities without CoT prompting. This highlights the limitations of current LLMs in strategic reasoning tasks, even as model size increases.\", \"future_work\": \"We plan to include additional statistical analysis to strengthen our interpretation and discuss this observation more thoroughly in the experimental results section of the revised paper.\\n\\nWe look forward to engaging in further discussions with you and receiving your additional guidance and feedback. Thank you!\"}",
"{\"title\": \"Seek for improvement of overall rating\", \"comment\": \"Dear reviewer cWxi,\\n\\nThank you very much for your response and for your positive feedback on our work! We would appreciate any further suggestions you might have to help improve the overall rating. If no additional concerns remain, we kindly request that you consider updating the rating or recommendation, as we believe the previous issues have been thoroughly addressed.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"title\": \"Thanks for your careful review! (1/2)\", \"comment\": \"We thank you for genuinely providing valuable comments on our paper. We will address your concerns one by one.\\n\\n**Originality:**\\n\\nWe appreciate your question regarding the games covered in our benchmark relative to *\\u201cA Survey of Strategic Reasoning with Large Language Models\\u201d* and examples involving Theory of Mind (ToM). We acknowledge that strategic reasoning is a vast domain with many forms, and we chose to focus on a specific subset of games (within the subset, the atomic games have a common form but different configurations and descriptions), particularly those resembling scenarios like the Prisoner\\u2019s Dilemma, Stag Hunt, etc., where strategic reasoning is fundamental. These games are a subset of the larger landscape of strategic reasoning tasks, and even within this subset, we found that current LLMs struggle with reasoning consistently and accurately. \\n\\nIn summary, compared to previous work, our benchmark: (1) considers a more comprehensive set of 144 types of 2x2 games; (2) explores different contextual framings of the same game structure in greater depth; and (3) introduces three novel complex game forms\\u2014sequential, parallel, and nested\\u2014based on atomic games, which were not designed in prior studies.\\n \\nBy focusing on this category of problems, we aim to reveal specific deficiencies in LLMs\\u2019 strategic reasoning abilities, such as the \\u201casymmetric pattern\\u201d observed in symmetric games, which has not been well-studied in prior benchmarks. While other benchmarks may explore broader game-theoretic concepts, our contribution lies in systematically evaluating LLMs on a targeted and relevant class of strategic reasoning tasks.\\n\\n---\\n\\n**Quality:** \\n\\nThank you for suggesting the incorporation of statistical tests to assess differences between models. We fully agree that this would enhance the rigor of the paper and strengthen our findings. In future revisions and subsequent work, we plan to include such statistical analyses. \\n\\nWe also share your observation that some games, such as the Prisoner\\u2019s Dilemma, might be more commonly exposed to LLMs during training, potentially leading to better performance on those games. To investigate this, we collected experimental data and computed metrics for two distinct sets of games: the counter-diagonal set (which includes well-known games like the Prisoner\\u2019s Dilemma, Stag Hunt, and Battle of the Sexes) and the non-counter-diagonal set. 
The results are summarized in the table below (using DA prompting): \\n\\n| Model | PAR (\\u2191, Famous/Total) | ID (\\u2193, Famous/Total) | BD (\\u2193, Famous/Total) | \\n|--------------------|-----------------------|-----------------------|-----------------------| \\n| gpt-4o | 46.88/52.08 | 16.93/16.81 | 25.52/28.49 | \\n| gpt-4o-mini | 29.17/14.06 | 27.99/39.52 | 61.20/56.21 | \\n| gpt-3.5-turbo | 40.63/30.21 | 25.39/27.64 | 50.78/50.15 | \\n| claude-3-5-sonnet | 45.83/59.38 | 16.67/14.79 | 33.33/27.76 | \\n| claude-3-haiku | 29.17/24.31 | 33.33/39.58 | 83.33/72.22 | \\n| Llama-3.1-70B | 29.17/13.02 | 12.50/36.15 | 25.00/40.71 | \\n| Llama-3.1-8B | 12.50/18.75 | 43.75/38.49 | 87.50/81.32 | \\n| Qwen2-72B | 33.33/43.06 | 25.00/26.30 | 33.33/35.59 | \\n\\nFrom the table, we observe that for advanced models such as **gpt-4o**, **claude-3-5-sonnet**, and **Qwen2-72B**, performance on the famous set of games does not consistently surpass (and in some cases is lower than) performance across all games. Conversely, for models like **Llama-3.1-70B** and **gpt-4o-mini**, the famous game set appears to be relatively easier. This is a fascinating finding and may indicate potential training data leakage for the more well-known games. \\n\\nWe acknowledge that this raises a significant and valuable research question, and we plan to explore this direction further in future work. Your observation has been instrumental in highlighting an area that warrants deeper investigation. Thank you for bringing this to our attention.\\n\\n---\\n\\n**Clarity:**\\n\\nWe appreciate your suggestions for improving clarity. We revise the introduction in our new revision on line 91 and line 94 respectively. Also we include another appendix section to provide clearer explanations for some terms.\\nRegarding Figure 6, we will explore alternative visualizations that better highlight the differences between the classic and story-based settings, as suggested.\"}",
"{\"summary\": \"This paper proposes a benchmark TMGBENCH. TMGBENCH incorporates 144 game types based on the Robinson-Goforth topology of 2\\u00d72 games and provides three forms (sequential, parallel, and nested) to construct more complex games using those 144 game types. Several LLMs were compared on the benchmark using several quantified metrics to identify their strengths and weaknesses.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The paper is very well-written.\", \"Objectives are clear, and how those objectives are achieved by this work is well demonstrated.\", \"Quantified metrics and visualisations have been used to compare LLMs on different tasks to assess their capabilities.\", \"Extensive experiments were conducted to exam the failure cases and the effect of ToM.\", \"Limitations were also discussed.\", \"Generation pipeline was demonstrated in Appendix.\", \"Overall, the reviewer quite enjoyed reading this paper.\"], \"weaknesses\": \"No particular weakness was identified by the reviewer. The reviewer is not an expert in game theory or reasoning. It is quite likely that the reviewer is unfamiliar with some pieces of related work or crucial part of this work.\", \"questions\": \"It is stated that \\u201cTheoretically, using these atomic games, we can expand the framework to generate infinitely many increasingly complex game forms.\\u201d However, standard answers are required to compute the inconsistency map. The reviewer wonders how to obtain the standard answers to newly generated games?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors create TMGBench, a game theory based benchmark for testing the strategic reasoning abilities of LLMs. They create a large number of games based on the \\\"Robinson-Goforth topology of 2x2 matrix games\\\" as well as utilizing synthetic data generation to build on top of said games for further game development. The games are then combined in a variety of ways, creating a complex structure for the LLMs to reason in. The authors then evaluate a selection of LLMs on the benchmark and report their results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": [\"Models are tested rigorously; 2,880 times for a single model in the single game tests, the complex games have a baseline of being tested 20 times, and there's testing for positional bias with the reFoToM / reSoToM prompts.\", \"Extensibility: this is a great way of creating a difficult-to-overfit-to benchmark, using the synthetic data generated stories as additional \\\"games\\\" to play.\", \"The metrics used (ID, BD, PAR) are comprehensive for evaluating a model's performance and good insight to how the models perform in these situations.\", \"The tables and figures nicely present the findings of the experiments and are mostly given good descriptions.\"], \"weaknesses\": [\"The paper can be hard to follow at times. It would be nice to have examples of the complex games to solidify the reader's understanding. The description given for sequential games doesn't quite make sense to me, even with two introductions. And because of that, I'm not sure how well it upholds the task of \\\"testing for strategic reasoning\\\".\", \"I'm not convinced that parallel forms are actually a test of strategic reasoning either, this seems closer to measuring the model's \\\"working memory\\\" and being able to keep track of the different situations at a given time step. But, this may be based on a misunderstanding of what the form is describing; it's not clear to me based on the descriptions given.\", \"The prompt given for `Example of classic game: classic/111` gives me pause for the rest of the prompt generation. \\\"Player A and Player B are playing a game. Either of them has two choices, namely A1, A2/B1, B2.\\\" Is this telling the model that the choices are {A1, A2} or {B1, B2}? I assume this, but that could lead to the model being confused about the task rather than being honestly judged on the difficulty of the task.\", \"a number of simple proofreading errors:\", \"\\\"sequential form, where LLMs are required to response multiple game tasks in a row\\\" --> \\\"to respond to multiple games\\\"\", \"\\\"As explained in Section 2.2, our benchmark are perfectly suitable\\\" --> your benchmark what?\", \"\\\"as for practical result provide by LLMs,\\\" --> results provided by\", \"\\\"which we expect robuster LLMs\\\" --> \\\"more robust LLMs\\\", I'm not sure if \\\"robuster\\\" is a word, but if it is it's not commonly used.\", \"\\\"using CoT prompting, which is robuster\\\"\", \"\\\"We perform 4 independent tests on each data point, covering both the classic setting and the story-based setting. 
Basically, we conduct 2,880 tests to generally evaluate a certain model\\\"\", \"this is weird, \\\"Basically, we conduct 2,880 tests...\\\" these should be combined to make flow better.\", \"\\\"We setup the test by divided it into several types\\\" --> \\\"by dividing it\\\"\"], \"questions\": [\"interesting that llama70B did worse on DA than 8B, why do you think this is?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for your follow-up comment!\", \"comment\": \"Thanks for your follow-up comment! We greatly appreciate the time and constructive comments you have provided.\\n\\n---\\n\\n**Concern 1: Does AI need strategic reasoning in games like poker or chess?**\\n\\nWe understand your concern that state-of-the-art models in games like poker and chess may not need to explicitly predict every move of other players. Indeed, reinforcement learning (RL) models, such as those used in DeepStack, Libratus, and Pluribus, optimize their strategies based on probabilistic reasoning and game-theory principles, without explicitly predicting each opponent\\u2019s move.\\n\\nHowever, large language models (LLMs) differ from these RL-based models in several ways. LLMs do not perform exhaustive search within the game tree, nor do they have a precise reward model. Instead, they generate responses based on patterns learned from large datasets, making their decision-making more analogous to human players. In games like chess or poker, an LLM-driven player does not calculate all possible outcomes exhaustively but predicts moves based on contextual understanding\\u2014much like humans who reason strategically, even without full information.\\n\\nIn a previous study [1] on LLMs and strategic games like UNO, it was found that while RL models generally excel at strategy optimization, models like GPT-4, when enhanced with reflection modules, can outperform RL models in certain strategic contexts. However, most LLMs still lag behind RL models in terms of overall performance. This underscores both the current limitations and the untapped potential of LLMs in complex game scenarios.\\n\\nAdditionally, we would like to clarify that the application of *Theory of Mind (ToM)* plays a critical role in these scenarios, whether for human players or LLM-based agents. ToM enables an agent to consider the mental states of other participants, which is crucial for strategic reasoning. In another study [2], LLM agents were elicited to utilize ToM capabilities for playing Guandan, a variant of poker, and it was shown that incorporating ToM consistently improves their performance against opposing agents.\\n\\nAs for chess, while it is a perfect-information game, the state space is vast, and LLMs, like human players, rely on heuristic reasoning rather than exhaustive search. Due to this, LLMs may not be able to explore the entire state space, but they can still reason strategically by leveraging learned patterns.\\n\\n---\\n\\n**Concern 2: Seemingly Incomplete Investigation On How Different Factors Affect Models Performance**\\n\\nWe acknowledge that the initial findings presented are a starting point and that a more detailed exploration of how different LLM characteristics influence performance on TMGBench would be valuable. In our current study, we focused on a limited set of factors due to some constraints (time, cost, etc.), but we are planning to expand this analysis in future work, incorporating more comprehensive ablation studies and further investigations into model architecture, training data, and model size. We hope to provide a more thorough analysis in future iterations of this research, which will enhance the robustness of our findings and contribute to the overall understanding of LLMs' strategic reasoning abilities.\\n\\n---\\n\\nWe hope this clarifies our approach and the future directions we plan to pursue. 
Once again, thank you for your feedback, and we value your continued engagement with our work!\\n\\n[1] Qin, Zhanyue, et al. \\\"UNO Arena for Evaluating Sequential Decision-Making Capability of Large Language Models.\\\" *Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing*. 2024.\\n\\n[2] Yim, Yauwai, et al. \\\"Evaluating and enhancing llms agent based on theory of mind in guandan: A multi-player cooperative game under imperfect information.\\\" *arXiv preprint arXiv:2408.02559* (2024).\"}",
"{\"summary\": \"The paper introduces TMGBENCH, a benchmark for systematically evaluating the strategic reasoning abilities of LLMs. By evaluating some LLMs on TMGBENCH, the paper identifies several flaws in LLMs\\u2019 performance, such as low accuracy rates and unstable inconsistency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and well organized.\", \"The games included in TMGBENCH are comphrehensive.\"], \"weaknesses\": [\"I am not fully convinced there exists the need for a benchmark fo evaluating strategic reasoning abilities of LLMs. In fact, there lacks an universal definition of the ability of strategic reasoning. In other words, what are the fundemental differences between tasks that require strategic reasoning and tasks that do not?\", \"If there is a clear definition of strategic reasoning, I would expect a more systematic study of existing LLMs on strategic reasoning. Why some LLMs perform better than others in terms of strategic reasoning? What are the influencing factors of LLMs? Data, Architecture, Model Size, training objectives?\"], \"questions\": [\"Regarding weakness 1:\", \"Do you have a clear definition of tasks that require strategic reasoning, as used in this paper?\", \"Could you explain more on how TMGBENCH addresses gaps in existing benchmarks for evaluating LLM reasoning capabilities?\", \"What are the fundemental differences between tasks that require strategic reasoning and tasks that do not, perhaps with concrete examples?\"], \"regarding_weakness_2\": [\"Could you conduct an analysis of how different LLM characteristics (e.g., model size, architecture, training data, or objectives) correlate with performance on TMGBENCH? and why.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for your follow-up comment!\", \"comment\": \"Thank you for the follow-up comment! We address the issues as follow:\\n\\n---\\n\\n**Advice: Adding References About the Role of Atomic Games**\\n\\nThank you for your helpful advice. In response, we have added a subsection in **appendix F** to clarify the role of atomic games, along with the relevant reference, to better explain their significance in our work.\\n\\n---\\n\\n**Question: Metrics of a \\\"Satisfactory\\\" Prompt**\\n\\nWe conducted extensive testing on the prompt, focusing on three key aspects:\\n\\n1. **Task Validity**: Task validity refers to how well the LLM understands the task described in the prompt and follows the format requirements to produce an error-free response. A satisfactory prompt should ensure high task validity, meaning the model can comprehend the task and generate appropriate outputs with error less frequently.\\n\\n\\tThis metric is represented by *accuracy rate (acc)*:\\n\\t$$\\n\\tacc = \\\\frac{\\\\sum_{i=1}^N \\\\mathbb{I} ( parsable(r_i) ) }{N}\\n\\t$$\\n\\twhere $N$ means the sampled query times, $r_i$ refers to the $i$-th response, $parsable(\\\\cdot)$ is the function to check if the response have correct format. The larger *acc* means higher task validity.\\n\\n2. **Response Consistency**: Response consistency measures the stability of the LLM's responses when given the same prompt multiple times. Our goal in prompt engineering is to minimize **aleatoric uncertainty**, which refers to the inherent randomness or noise in the prompt itself. While consistency is important, it's crucial to note that **epistemic uncertainty**\\u2014which stems from the model's limitations\\u2014may still remain, even with a fixed prompt. This highlights the model\\u2019s true capabilities and its potential limitations in handling certain tasks.\\n\\n\\tThis metric is represented by *gini index (gini)*:\\n\\t$$\\n\\tgini = 1 - \\\\sum_{i=1}^T p_i^2\\n\\t$$\\n\\twhere $T$ is the number of types of answer (derived from error-free response) in total tries, $p_i$ indicates the proportion of the $i$-th type of answer to the total tries. The larger *gini* means higher response consistency.\\n\\n3. **Paraphrase Similarity**: Paraphrase similarity evaluates how well the LLM maintains the quality and correctness of its responses when the prompt is paraphrased with similar meaning. A satisfactory prompt should ensure that the model generates consistent responses, regardless of how the task is phrased, demonstrating its robustness to variations in input phrasing.\\n\\n\\tThis metric is represented by *average deviation (dev)*:\\n\\t$$\\n\\tdev = \\\\frac{\\\\sum_{i=1}^M sim(R_P, R_{P_i})}{M}\\n\\t$$\\n\\twhere $M$ is the number of paraphrased prompt of target prompt $P$, $P_i$ indicates the the prompt of the $i$-th paraphrase. $sim(\\\\cdot, \\\\cdot)$ is the function computing the similarity of two response set (using $R$ to represent response set). The smaller *dev* means higher paraphrase similarity.\\n\\nRegarding the candidate prompt, we first manually constructed some and then iteratively sampled alternative prompts by paraphrasing and rephrasing them with different LLMs.\\n\\n---\\n\\n**Question: No Further Demonstration of Complex Form Games**\\n\\nWe apologize for the inconvenience caused by the lack of concrete examples for the complex form games, which might have made the paper hard to follow. 
To clarify, we have added a new figure in a subsection of **appendix F** to better illustrate the different complex forms and their application in our framework. We hope this addition will provide a clearer understanding of how these games work.\\n\\n---\\n\\nWe look forward to engaging in further discussions with you and receiving your additional guidance and feedback. Thank you!\"}",
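The three prompt-screening metrics defined in the comment above (accuracy rate, Gini index, and average deviation) are straightforward to compute once responses are collected. The sketch below is a minimal illustration of those formulas; `parsable` and `sim` stand in for the authors' unspecified format checker and response-set comparison function, so their concrete signatures here are assumptions rather than part of TMGBench's released code.

```python
from collections import Counter
from typing import Callable, Sequence

def accuracy_rate(responses: Sequence[str], parsable: Callable[[str], bool]) -> float:
    # acc: fraction of the N sampled responses whose format can be parsed.
    return sum(parsable(r) for r in responses) / len(responses)

def gini_index(answers: Sequence[str]) -> float:
    # gini = 1 - sum_i p_i^2 over the T answer types extracted from
    # error-free responses (0 when every answer is identical).
    counts = Counter(answers)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def average_deviation(target_responses: Sequence[str],
                      paraphrase_responses: Sequence[Sequence[str]],
                      sim: Callable[[Sequence[str], Sequence[str]], float]) -> float:
    # dev: mean of sim(R_P, R_{P_i}) over the M paraphrases of prompt P;
    # per the comment above, smaller values indicate higher paraphrase similarity.
    return sum(sim(target_responses, rp) for rp in paraphrase_responses) / len(paraphrase_responses)
```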
"{\"title\": \"Thank you for your review and feedback\", \"comment\": \"Dear reviewer SyXy,\\n\\nThank you for your review and constructive feedback! We hope that our responses have sufficiently addressed the questions and concerns you raised. We would greatly appreciate your continued support and any additional comments or suggestions you may have.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"title\": \"Thank you for your review and feedback\", \"comment\": \"Dear reviewer XhyT,\\n\\nThank you for your review and constructive feedback! We hope that our extra responses can resolve the rest of the concerns you had. We have already thoroughly explained our viewpoint about the issue of data contamination in the comments and included sufficient evidential references. Please feel free to share any additional comments or suggestions. We greatly appreciate your thorough review and continued support.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"summary\": \"The paper presents a benchmark for strategic reasoning comprised of all 2x2 game ordinal payoff arrangements. Additional evaluation capabilities include testing agents when reasoning on compositions of these games (in parallel, sequentially, or where one game influence the choices in a subsequent game) and reframing the games in story-based scenarios. Evaluations study open and closed source LLMs on this benchmark, assessing: how well they produce optimal choices, the extent to which they exhibit asymmetrically biased responses when payoff matrices are flipped, and using theory of mind to improve performance. The results demonstrate that existing LLMs do not saturate the benchmark, have varying degrees of bias based on the payoff structure and story framing, and struggle to leverage theory of mind to improve results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"# originality\\nModest.\\n\\nEvaluating LLMs in strategic reasoning games is a thoroughly investigated topic (as attested by the related work). Examining anti-symmetric reasoning patterns is a question I have not seen probed before and important to consider for this setting in general.\\n\\n# quality\\nModest.\\n\\nExperiments demonstrate the benchmark can find differences among LLMs. Models fail to saturate the success criteria, particularly for more stringent requirements like perfect answering or demonstrating theory of mind. Biases based on the generated stories show there is clear room for improving LLM context sensitivity, however it is not clear how much this could be mitigated by different prompts for the strategic reasoning (a dimension not explored in the paper).\\n\\n# clarity\\nModest.\\n\\nThe introduction was vague and hard to follow without reading the rest of the paper. Experiments are documented well. Some figures were hard to parse or could use a different presentation (notes below).\\n\\n# significance\\nModest.\\n\\nThere are numerous evaluations for strategic reasoning in game theoretic games. This focuses on 2x2 games, omitting multi-agent agents or repeated/multi-turn games (excepting the composite games tested). The paper will be of some interest to the community focusing on this subset of LLM capabilities.\", \"weaknesses\": \"Note: These weaknesses are phrased as questions to facilitate discussion.\\n\\n# originality\\nHow do the games in this benchmark cover those not covered in the \\\"game theory\\\" subsection of the cited paper \\\"A Survey of Strategic Reasoning with Large Language Models\\\"? Or for the \\\"Societal Behavior\\\" examples that include theory of mind?\\n\\n\\n# quality\\n\\nThe experiments should include statistical tests when claiming differences among model types. At least in the cases where multiple runs were possible or multiple scenarios are being aggregated (for example, in Table 1 and Figure 5). Many claims seem plausible, but the tests are there to provide rigor.\\n\\nThe paper would benefit from evaluating the concern stated in the introduction that there is scenario leakage of common game forms. Was there evidence of scenario leakage based on the games in Robinson-Goforth topology results? Do the games most likely to be leaked (like Prisoner's Dilemma) demonstrate substantial performance differences relative to other games?\\n\\n\\n# clarity \\n\\nThe introduction could be clearer on details later explained in the paper. 
Examples: \\n- \\\"performance variability marked by coefficients\\\" - Coefficients of what?\\n- \\\"marked by an asymmetric pattern\\\" - What asymmetric pattern?\\n\\nFigure 6 is hard to read. It might be better plotted by showing the differences in scores between the classic and story-based settings instead.\\n\\n# significance\\n\\nWhat are the key insights we learn from TGMBench that were not revealed in prior benchmarks? This is not very clearly articulated in the paper and would help establish it's originality and significance. As TGMBench is a benchmark, the value it provides derives in exposing LLM capabilities that are not already apparent in alternatives.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for addressing the comments I brought up.\\n\\n> re: sequential/parallel form games for strategic reasoning\\n- Thank you for elaborating on this point. From your comment it's not clear if the \\\"atomic games as primary components of complex social scenarios\\\" idea was already mentioned in the paper and I overlooked it. If it wasn't included in the paper, adding this would shift my evaluation of the paper and preferably including references that establishes this notion.\\n\\n\\n> Potential ambiguous prompt expression\\n- You mention, \\\"we conducted extensive prompt engineering to ensure that the prompts we currently use can reliably test the models\\\". I'd want to see the metrics you decided on that satisfied the \\\"reliably testing the models\\\". Currently the reader has no insight as to what was decided upon for a prompt being \\\"satisfactory\\\" or what the search space of prompting techniques were. It may be the case that there's little prompting research for this niche (which I honestly doubt, there's a decent amount of game theory literature that has likely looked into relevant prompting techniques), but mentioning that lack of previous research along with your guiding principles for 1. developing and 2. accepting a prompt would give the reader context on what to expect here.\\n- Note, I'm focusing on these prompts as they are the medium through which model insights become available to us. It may be the case that a model will show greatly different results across the benchmark given different prompting strategies, and being able to track this gives the reader credence to what attention was paid to this during this study.\\n\\n> proofreading\\n- One suggestion I'd make for making paper updates during rebuttals is to include the changes in red text to make easier for the reviewer to identify the exact changes; a \\\"visual diff\\\" of sorts. Just a small note for future reference.\\n\\nFinally, there was no mention of making the paper easier to follow (adding game examples, description of the sequential games). I would still be interested in seeing this.\"}",
"{\"title\": \"Thanks for your further comment!\", \"comment\": \"Thank you so much for your further comment! We greatly appreciate your engagement and the opportunity to clarify our analysis.\\n\\nTo address your concerns, we would like to emphasize that the conclusions drawn from our source and perplexity (PPL) analyses are not contradictory to the observations about famous vs. unfamous games\\u2014\\u2014our TMGBench has a relatively low risk of data leakage, as outlined below.\\n\\nFirst, the dataset of TMGBench features on the synthetic context and we use a standardized template to develop it, and even for the **story-based** part, they have such long context (**200\\\\~450 tokens** per data point, as shown in **Figure 11** in the paper). This substantially lowers the likelihood of any entire paragraph being exposed to LLMs during training.\\n\\nAlso, for the **classic** part, from our original experiments and additional ablations, it is possible that some games have been exposed to LLMs, but **it will not be a risk**, and we provide some reasoning as below:\\n\\n1. The normal level of perplexity tells us that there is **low possibility** of exposing an **entire same** data point of our TMGBench to LLMs, for the definition of perplexity is to measure to what extent LLM can correctly predict next token based on the preceding context.\\n2. Some finding of the paper, along with the additional finding on famous vs. unfamous games, indicate that some LLMs might be **familiar** with some of the games, while this kind of **familiarity**, is not derived from the leakage of our data point. Instead, it is most probably originated from their **emerging ability** that they utilize to apply such **familiar** knowledge to do better in reasoning. For example, the training corpus of a LLM may include content which indirectly related to a famous game like *The Prisoner\\u2019s Dilemma* rather than an unfamous games, so LLMs have a chance to know more information about the game with similar game structure. (This is similar to solve a math problem or a card game, both need some pre-knowledge/experience which can boost ones\\u2018 performance, right?)\\n3. Actually, in many prior studies [1, 2, 3, 4, 5] , some classic games (not necessarily bi-matrix games, but well-known in game theory or economics) are still being employed to evaluate if LLMs can conduct strategic reasoning like humans or even perform better. Compared to these work, TMGBench have a lower risk of data leakage because our data points are **much longer** and we use a **synthetic** method which incorporates **much more variables**.\\n\\nIn conclusion, while famous games might be easier for LLMs due to their inherent familiarity, this familiarity is not a consequence of TMGBench data leakage. Instead, it reflects the models' ability to leverage prior knowledge, further demonstrating the value of TMGBench in assessing strategic reasoning.\\n\\nOnce again, thank you for your feedback, and we value your continued engagement with our work!\\n\\n---\\n\\n[1] Aher, Gati V., Rosa I. Arriaga, and Adam Tauman Kalai. \\\"Using large language models to simulate multiple humans and replicate human subject studies.\\\" *International Conference on Machine Learning*. PMLR, 2023.\\n\\n[2] Horton, John J. *Large language models as simulated economic agents: What can we learn from homo silicus?*. No. w31122. National Bureau of Economic Research, 2023.\\n\\n[3] Guo, Jiaxian, et al. 
\\\"Suspicion-agent: Playing imperfect information games with theory of mind aware gpt-4.\\\" *arXiv preprint arXiv:2309.17277* (2023).\\n\\n[4] Duan, Jinhao, et al. \\\"Gtbench: Uncovering the strategic reasoning limitations of llms via game-theoretic evaluations.\\\" *arXiv preprint arXiv:2402.12348* (2024).\\n\\n[5] Mei, Qiaozhu, et al. \\\"A Turing test of whether AI chatbots are behaviorally similar to humans.\\\" *Proceedings of the National Academy of Sciences* 121.9 (2024): e2313925121.\"}",
"{\"metareview\": \"The paper introduces an LLM benchmark based on game theory games (prisoners dilemma and its ilk, not games that people actually play). Reviewers are lukewarm about the paper, partly because of a perceived dearth of novelty and justification, which I consider not to be reason enough to reject the paper. More importantly in this case, there seems to be quite a few outstanding questions about many issues, including potential data leakage for some games, the differentiation between this and other similar benchmarks, and the clarity of the prompts. The authors attempted to address the concerns of the reviewers, but not always in a convincing way.\", \"additional_comments_on_reviewer_discussion\": \"The authors attempted to address the concerns of the reviewers, but not always in a convincing way.\"}",
"{\"title\": \"Thank you for your review and feedback\", \"comment\": \"Dear reviewer AuNq,\\n\\nThank you for your review and constructive feedback! We hope that our responses and revisions have addressed the concerns you raised. We have carefully explained our viewpoint in comments and incorporated additional sections to make our standpoint more clearly. Please feel free to share any additional comments or suggestions. We greatly appreciate your thorough review and continued support.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"title\": \"Look forward to your new feedback\", \"comment\": \"Dear reviewer cWxi,\\n\\nWe are very concerned whether our response has addressed your concerns and look forward to your new feedback.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"title\": \"seek for latest feedback\", \"comment\": \"Dear reviewer XhyT,\\n\\nGiven the rebuttal deadline, we kindly request your latest feedback at your earliest convenience. Thank you for your understanding and prompt attention to this matter.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"comment\": \"Thank you for addressing my comments and questions.\", \"i_have_updated_my_score_from\": \"- Presentation: 1 --> 2\\n- Contribution: 3 --> 4\\n\\nStating the soundness on game composition makes this benchmark more robust. Also, including the examples helps the paper make more sense.\\n\\nYou do a great job of explaining your prompt metrics, data should be collected on this and included (along with these descriptions) in the paper itself.\"}",
"{\"title\": \"Look forward to your new feedback\", \"comment\": \"Dear reviewer XhyT,\\n\\nWe are very concerned whether our response has addressed your concerns and look forward to your new feedback.\\n\\nBest,\\n\\nPaper 6932 Authors\"}",
"{\"title\": \"seek for latest feedback\", \"comment\": \"Dear reviewer AuNq,\\n\\nGiven the rebuttal deadline, we kindly request your latest feedback at your earliest convenience. Thank you for your understanding and prompt attention to this matter.\\n\\nBest,\\n\\nPaper 6932 Authors\"}"
]
} |
1KLBvrYz3V | Century: A Framework and Dataset for Evaluating Historical Contextualisation of Sensitive Images | [
"Canfer Akbulut",
"Kevin Robinson",
"Maribeth Rauh",
"Isabela Albuquerque",
"Olivia Wiles",
"Laura Weidinger",
"Verena Rieser",
"Yana Hasson",
"Nahema Marchal",
"Iason Gabriel",
"William Isaac",
"Lisa Anne Hendricks"
] | How do multi-modal generative models describe images of recent historical events and figures, whose legacies may be nuanced, multifaceted, or contested? This task necessitates not only accurate visual recognition, but also socio-cultural knowledge and cross-modal reasoning. To address this evaluation challenge, we introduce Century -- a novel dataset of sensitive historical images. This dataset consists of 1,500 images from recent history, created through an automated method combining knowledge graphs and language models with quality and diversity criteria created from the practices of museums and digital archives. We demonstrate through automated and human evaluation that this method produces a set of images that depict events and figures that are diverse across topics and represents all regions of the world.
We additionally propose an evaluation framework for evaluating the historical contextualisation capabilities along dimensions of accuracy, thoroughness, and objectivity. We demonstrate this approach by using Century to evaluate four foundation models, scoring performance using both automated and human evaluation. We find that historical contextualisation of sensitive images poses a significant challenge for modern multi-modal foundation models, and offer practical recommendations for how developers can use Century to evaluate improvements to models and applications. | [
"historical",
"contextualisation",
"image",
"dataset",
"multimodal",
"VLM",
"evaluation"
] | Accept (Spotlight) | https://openreview.net/pdf?id=1KLBvrYz3V | https://openreview.net/forum?id=1KLBvrYz3V | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yuJR2qRH7B",
"oYbxcU6xsS",
"oOWPc3B3Mt",
"hkVnlrd1fF",
"cPns6qU7bI",
"Xdcya4GKEr",
"VjBgyCSM8f",
"CFGz3dp3ja",
"AyJxRGdvJN",
"3dMDvXJyGd",
"2lHgk0Xp7h",
"1r4ZOLPg6t",
"1MuYTPdJYy"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment"
],
"note_created": [
1732356460591,
1731709815695,
1737523435883,
1730656938788,
1732636322655,
1730260212554,
1730627940249,
1732662407538,
1730581848150,
1732581429232,
1733932136516,
1731709628968,
1732553547481
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_qqDa"
],
[
"ICLR.cc/2025/Conference/Submission1110/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_Ciz2"
],
[
"ICLR.cc/2025/Conference/Submission1110/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_qqDa"
],
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_zRQb"
],
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_18jS"
],
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_18jS"
],
[
"ICLR.cc/2025/Conference/Submission1110/Reviewer_zRQb"
],
[
"ICLR.cc/2025/Conference/Submission1110/Area_Chair_Yxbz"
],
[
"ICLR.cc/2025/Conference/Submission1110/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1110/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for the author feedback. Most my concerns have been addressed. I'd like to raise the rating to accept.\"}",
"{\"comment\": \"We thank the reviewer for their thoughtful comments on our manuscript.\\n\\nWe note the reviewer\\u2019s concern on the dataset scale. We designed the dataset for use in evaluations of historical contextualisation capabilities of multi-modal models - not for model training or fine-tuning. With this dataset, developers will be able to assess how well models contextualise sensitive historical images. They can use Century to inform post-training and deployment decisions\\nWhile we invite future work on improving system capabilities, the dataset does not include \\\"target\\\" responses of how systems responded to the different classes of queries for the images in this dataset (ie. what is being evaluated in Table 3). We only release the images and related metadata - a point we make clearer in the updated draft (lines 320-323). We do hope that the dataset enables application and system developers to make their own ethical and normative choices in their deployment context (possibly adapting or building on our work).\\n\\nIn constructing our evaluation task, we developed several criteria along which we test for the \\u201cgoodness\\u201d of the historical contextualisation response. Surprisingly, even though models are almost certainly exposed to images and their contexts in training, they perform poorly even on fairly objective elements of the task such as identification. This can be clearly seen in the accuracy results we provided in Table 3 of our manuscript where we showed that the best model achieved 53.3% accuracy on this task. Because of this and the other empirical results in Table 3, we don't think training on \\u201ctarget\\u201d responses would be likely to lead to improvements along even the accuracy dimensions of our criteria, let alone others that we evaluated in this work.\\n\\nWe cite evaluation frameworks related to historical contextualisation from images in the Related Works section, but to our knowledge, Century is the first benchmark focused on evaluating multi-modal generative models on historical contextualization of images. We have added the two suggested citations on automated metadata annotation and evaluating visual reasoning in context-rich scenarios in our discussion of how Century builds on previous efforts to measure visual reasoning and contextualisation (line 473, 475).\\n\\nWe understand the reviewer\\u2019s concerns on the potential bias in the benchmark. The most thorough analyses of potential biases in Century can be found in Table 8 in Appendix K, where we look at the distribution of figures, events, and locations represented in the dataset by UN world subregion. We find that as per the human evaluation labels, every subregion is represented by at least 5.1% of the images in Century, indicating that our dataset contains a relevant amount of images from all areas of the world. Unsurprisingly, we also find that images taken from Wikipedia are skewed in their distribution (with images from Western Europe and North Americas most prominently featured), but we do find evidence that Century contains images from all areas of the world. \\n\\nWe make recommendations on mitigation strategies that could be applied in future work in the Limitations, calling upon researchers to integrate participatory perspectives and build upon the representativeness of the work in a targeted way (e.g. contributing images of a specific culture). 
We also are excited to see future work build on our methods for creating the dataset, and adapt them to other historical data sources (eg, cultural archives).\\n\\nRegarding the fit of this paper in ICLR, we are responding to the call for papers, which explicitly seeks work related to \\\"generative models\\\" and \\\"datasets and benchmarks\\\" and \\\"societal considerations including fairness, safety, privacy.\\\" Our work directly speaks to multiple subject areas in the CFP. Additionally, previously accepted papers in ICLR 2023 and 2024 include papers that center contributions through datasets and benchmarks:\\n* MIntRec2.0: A Large-scale Benchmark Dataset for Multimodal Intent Recognition and Out-of-scope Detection in Conversations, ICLR 2024\\n* SWE-Bench: Can Language Models Resolve Real-World GitHub Issues?, ICLR 2024 (oral)\\n* MEDFAIR: Benchmarking Fairness for Medical Imaging, ICLR 2023 (notable-top-25%)\\n\\nWe thank the reviewer for their careful engagement with the paper and their suggestions to strengthen the manuscript, which we have included in the updated version. If we have addressed the reviewer\\u2019s most pressing concerns, we kindly ask the reviewer to consider adjusting their score to reflect this. Otherwise, we are looking forward to proceeding with the discussion and incorporating any further feedback to our manuscript.\", \"title\": \"Response to Reviewer qqDa\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}",
"{\"summary\": \"This paper is about \\u201cCentury\\u201d a dataset and framework designed to evaluate the ability of multi-modal models to contextualize sensitive historical images accurately, thoroughly, and objectively. To build the dataset, images were sourced with knowledge graphs, language models, and they were processed according to museum practices, considering especially recent historical events, figures, and locations, with images that may hold socio-cultural significance. The idea is to address the representation of historical nuance in generative AI and proposes an evaluation protocol for assessing models on tasks requiring socio-cultural and historical contextualization. After the construction of the Century dataset, it is tested with recent private foundation models. The paper reports that these models have difficulties addressing the complexity of historical contextualization.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The proposed interdisciplinary dataset focus on historical contextualization and represent a valuable contribution to the field, addressing a crucial gap in existing evaluation methodologies.\", \"The work should be well reproducible with the released dataset, search terms, and evaluation details. Every part of the work is well detailed and released. Authors have put significant effort into this.\", \"The paper is well written, well structured and all parts, also detailed in the appendix, are well informative.\"], \"weaknesses\": [\"the use of Wikipedia raises concerns about biases inherent of the platform. Wikipedia\\u2019s coverage of historical events is not uniform across regions or cultures, potentially leading to an overrepresentation of certain perspectives. Anyway, the limitation is acknowledged and is anyway a first step into the right direction.\", \"the definition of \\\"sensitive\\\" is based on interpretations from museums and archives, which seems a good starting point. However, I wonder about whose perspectives are considered \\\"sensitive\\\" and who gets to define them. Maybe some input from the communities whose histories are represented in the images should be considered, but I understand the difficulty of doing that.\"], \"questions\": [\"Since the release of new LLMs are very frequent, I wonder what could be done to further automatise the evaluation on the dataset.\", \"I believe the dataset could potentially be misused to train models that generate biased or harmful content related to sensitive historical events. What do you think about this aspect?\", \"Could the limited representation of certain communities in the dataset be harmful for training of future models based on this dataset? I'm not sure about its inclusiveness and how to not perpetuate existing biases.\"], \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"details_of_ethics_concerns\": [\"In my opinion:\", \"The dataset could potentially be misused to train models that generate biased or harmful content related to sensitive historical events.\", \"The limited representation of certain communities in the dataset could be harmful for training of future models based on this dataset, I'm not sure about inclusiveness and how to not perpetuate existing biases.\"], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer 18jS\", \"comment\": \"We appreciate the reviewer's thoughtful feedback on our manuscript. We're pleased they recognised the paper's comprehensive approach to addressing a significant evaluation challenge using a diverse dataset.\\n\\nRegarding the reviewer's suggestion to evaluate open-source models, we agree that this would be valuable for the community. However, most open-source multimodal models (like PaliGEMMA) require substantial fine-tuning to handle complex tasks like historical contextualization. The few systems that are fit for this purpose are not available for use due to their licenses, so we were unable to evaluate these systems directly. To encourage open investigation, we've made our dataset and evaluation methods publicly available, enabling researchers to explore how current and future systems (including new tokenization strategies, inference-time compute methods, and composite systems) impact historical contextualisation capabilities.\\n\\nAs recommended, we've expanded the limitations section (lines 502-506; 513-517; 524-530) to address potential unintended consequences of our methodology. We also discuss short-term mitigations for developers to avoid potential pitfalls when optimizing for Century, alongside longer-term improvements.\\n\\nWe've added clarification to the captions of Figures 6 (page 23) and 11 (page 34) to improve their interpretation, addressing the reviewer's concerns about the figures.\\n\\nRegarding demographic data for raters, we are required to store rater data with obfuscated identifiers, which means demographic data for individual raters are not easily recoverable. We acknowledge this as an important addition to future work, especially participatory work that seeks to identify the perspectives of a specific group of people on the historical contextualisation task. We provide details on our the recruitment strategy decisions that likely influenced ater pool composition in lines 305-307 and Appendices L and T. \\n\\nWe thank the reviewer for their careful review and insightful comments, which have strengthened our manuscript.\"}",
"{\"summary\": \"The authors present Century, a dataset of 1,500 sensitive historical images curated from recent history. It is generated using an automated process that combines knowledge graphs and language models, guided by criteria from museum and digital archive practices to ensure a balanced representation of global events and figures. The dataset is validated through both automated and human evaluations, demonstrating its diversity and comprehensiveness. Additionally, the authors introduce an evaluation framework to measure historical contextualization along dimensions of accuracy, thoroughness, and objectivity, applying it to assess the performance of four foundational models, with both automated metrics and human feedback supporting the results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-articulated and clear, enhancing readability and accessibility.\\n\\nAddressing sensitive historical images is a compelling topic with high relevance, and the proposed framework is both innovative and thoughtfully executed.\\n\\nThe methodology for identifying and curating sensitive historical images, integrating knowledge graphs with language models, provides a scalable approach with potential research applications across history and AI.\\n\\nThe Century dataset could serve as a valuable resource for researchers working on similar challenges, including those focused on historical image representation, automated content generation, and bias mitigation.\", \"weaknesses\": \"I'm a bit concerned about the dataset scale. At 1,500 images, the dataset may be too small to train deep learning models directly, potentially limiting its use in large-scale AI training scenarios. A dataset size of more than 10K images would be a good estimation for training models.\\n\\nFurthermore, as a new framework, the effectiveness of Century could benefit from comparative analysis with existing datasets or similar historical image frameworks. This would provide a clearer benchmark of its strengths and limitations. If there are not closer frameworks, some related research might also help in comparison, such as the following papers for your reference: \\n\\nWu, Mingfang, et al. \\\"Automated metadata annotation: What is and is not possible with machine learning.\\\" Data Intelligence 5.1 (2023): 122-138.\\n\\nWadhawan, Rohan, et al. \\\"ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models.\\\" arXiv preprint arXiv:2401.13311 (2024).\\n\\nFinally, the authors candidly discuss certain biases, particularly concerning dataset distribution and generative labeling. These limitations could impact future applications, and additional mitigative strategies would strengthen the framework's applicability.\", \"minor\": \"It is unclear to me whether a dataset-centric paper with a focus on historical content aligns fully with ICLR\\u2019s primary scope, which typically emphasizes innovations in machine learning.\", \"questions\": \"Have the authors conducted any formal bias testing within the dataset? Is it possible to elaborate on potential approaches the authors have considered for addressing these biases. 
Understanding how these biases may clarify the power of the dataset, the impact of model outcomes, and outlining potential mitigation strategies, would further enhance the dataset\\u2019s robustness for future research.\\n\\nHave the authors considered ways to expand the dataset or if they envision it being used primarily for evaluation rather than training.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents a new dataset for evaluating multimodal models\\u2019 capability to describe historical and sensitive images in terms of several criteria, including factual errors and due weight. The images in the dataset are carefully chosen so that they are sensitive, controversial, and/or commemorative. The evaluation protocol includes automated evaluation and human evaluation. The paper gives some recommendations for evaluating models with the dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I think this evaluation is important conceptually and in the application level. One expectation to a foundation model may be to generate unbiased (or not one-sided) descriptions of sensitive events, and the proposed dataset can serve as a benchmark in this regard.\\n\\nAlso, the paper recommends that human evaluation is still critical even though LLMs can evaluate a target model, which is fair. According to Table 3, foundation models and humans do not look consistent, and evaluation solely by the automated protocol seems insufficient. The paper seems faithful to this evaluation.\", \"weaknesses\": \"I think the dataset is useful for the application level, while it\\u2019s not clear from the technical level what aspects of a model it tries to evaluate. The proposed evaluation task seems to require (1) identification of the event, person, etc. depicted in the image, (2) associating the identified entities with the corresponding historical event (so that it can give a contextualized description), and (3) describing the image in a fair and objective way. I think (1) involves the perceptual capability of a model, while (2) and perhaps (3) involves the knowledge the model has. (3) may also involve the criterion of goodness of generated description used in the training. The proposed protocol evaluates a model without being aware of these different aspects (the paper partly mentions this point in Section 5.1), which makes the interpretation of the result extremely hard. I understand that as the foundation model users rarely have knowledge about how the model is trained, it\\u2019s not straightforward to isolate these different aspects. However, without some ways to interpret the results (as described in Section 5.1 as a future application of the dataset), insights that the dataset will provide may be limited.\\n\\nThe paper is often hard to read. I don\\u2019t see what the dataset offers (does it contain only images or some example descriptions of events?) in the main paper.\", \"questions\": \"I would like to see some discussion on the first point in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"thank you\", \"comment\": \"Dear authors, thank you for your thorough response. I am happy with the clarifications and the submission is still in line with my original score of accept.\"}",
"{\"summary\": \"This paper introduces Century, a dataset with 1,500 images of sensitive historical images (including a new method to identify images like those in the dataset). Along with Century, the authors propose an evaluation framework to measure how well models do at \\u201chistorical contextualization.\\u201d\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"S1. The authors tackle a problem that many researchers shy away from or do not even consider, as historical contextualization is a complex task and has no objective ground truth. This paper is a thorough, high-quality effort to 1) help understand our models through this lens, and 2) highlight the importance of historical contextualization abilities in large vision-language models.\\n\\nS2. The paper is very well-written; the methods and results are presented in a straightforward manner and thoroughly-discussed.\\n\\nS3. Century is a diverse dataset with a decent balance across regions, content, and image type. The dataset can always be more diverse and balanced along these axes, but it is a respectable collection for evaluation given that its limitations are acknowledged.\", \"weaknesses\": \"W1. The evaluations are done on closed-source models, which are helpful in illuminating their capabilities given that we don\\u2019t know much about their data or architecture. However, it would be incredibly useful to benchmark open-source VLMs alongside them, as the associations with training data, architecture, etc. and historical contextualization abilities can help the community to identify how to score better on this benchmark.\\n\\nW2. I would love to see a more thorough limitations section. While the points covered are valid and important, there is so much nuance to the dataset, evaluation metrics, etc. The community would benefit from a paper that not only presented a useful dataset and benchmark for historical contextualization, but thoroughly (to a best approximation) enumerated the pitfalls one could fall into when maximizing performance on this benchmark, and described the demographic and geographic distribution of human evaluators.\\n\\nW3. Some of the figures seem to be missing legends, or at least are not clear enough in what the colors mean (Figures 6 and 11). I assume the x-axis is labeled 1-5, but the colors and lack of x-axis label are a bit confusing.\", \"questions\": \"Q1. Is it possible to recover the geographic and demographic distribution of the human evaluators? That data seems especially important to consider for historical contextualization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"I appreciate the authors for detailed responses. I think the paper is valuable for the application, and I think the response makes sense. I'll update the score.\"}",
"{\"metareview\": \"The paper proposes a dataset of ~1500 contextualized historical images, in order to test capabilities and modern methods in inferring this contextualization. The authors show existing methods do not handle this historical data well. The authors make an effort to cope with biases; some concerns about ethics and misuse, as well as choice of methods tested, are raised and addressed to a reasonable but not complete degree. The main contribution of the paper, to shed light on how historical or politically charged imagery may be described by current models, is significant despite these concerns.\", \"additional_comments_on_reviewer_discussion\": \"Several reviewers responded that their concerns were addressed\"}",
"{\"title\": \"Response to Reviewer zRQb\", \"comment\": \"We thank the reviewer for thoughtful comments that scrutinise how historical contextualisation \\u2013 a higher-level capability our benchmark claims to measure \\u2013 may consist of multiple lower-level fundamental capabilities, including but not limited to entity recognition, scene understanding, and world knowledge.\\n\\nHow different capabilities may be expressed when an AI system performs a complex task, such as historical contextualisation, is an important research question. We note that the reviewer proposed unpacking into low-level tasks is not the only possible approach.\\nIn order to output a contextualised description of a historical image, such as a photograph from World War 2, a given model may be invoking any number of latent capabilities. Which capabilities are expressed and how they are encoded is likely to vary across model specifications, such as its technical architecture, training procedure and multimodal corpora it is trained on.\\n\\nMeasuring specific lower-level capabilities has been the focus of many previous work (for example, MS-Celeb-1M for entity recognition, OKVQA for scene understanding, InfoSeek for world knowledge, Dollarstreet for fairness in object recognition, TallyVQA for object counting, DocVQA for OCR in context). While there is established coverage for low-level vision tasks, we identified a gap in the available evaluations that inspired us to create Century: there was no measurement for how well multi-modal models could contextualise and reason about images grounded in real-life events in open-ended text generation tasks.\\n\\nWhile our focus has been on measuring the higher-level historical contextualisation capability, we hope that releasing Century along with an evaluation protocol will enable additional studies of the interplay between low-level and high-level capabilities as pointed out by the reviewer. \\n\\nIn this work, we propose a first decomposition which partially overlaps with the proposed (1) identification (2) association (3) fair / objective description breakdown.\\n\\nWe target Capability (1), the identification of the event or person depicted in the image, in our evaluation method as the \\u201ccorrect identification\\u201d question (Table 3). We ask auto-raters and human raters to identify if the target model correctly names the entity depicted in the image, providing a measure of the \\u201cperceptual capability\\u201d of the model. \\n\\nFor (2), the reviewer makes an interesting point about the capabilities that may be necessary to associate figures and events depicted to their historical context. Qualitatively, we find that different images are more difficult to contextualise given the image alone (e.g. identifying the context behind a photograph of a famous political figure is more straight-forward than associating an image of a crowd with a specific historical event). We discuss this point in the \\u201cRecommendation for Developers\\u201d section, but have pulled a longer discussion into results section to provide guidance on interpreting results in light of the differences in contextualisation difficulty (lines 425 - 473). 
\\n\\nFor (3), or the \\u201cgoodness\\u201d of the generation, this is covered by several of the evaluation criteria, including \\u201cfactuality,\\u201d which evaluates if inaccuracies or factual errors are present in model output, and \\u201cappropriate summary,\\u201d which evaluates how much relevant detail on the event depicted is present in the model output. \\n\\nThese partial overlaps in our decomposition approach and the approach recommended by the reviewer may indicate that the two approaches are complementary.\\n\\nTo the reviewers\\u2019 concern on the paper not fully describing dataset contents, we have added content in Section 3 to clearly state what fields are included in the datasets we are open-sourcing (lines 320-323). Alongside the publication, we will link the Github page, which will also describe the different fields contained in the dataset.\\n \\nWe thank the reviewer for their careful engagement with the paper and their suggestions to strengthen the manuscript, which we have included in the updated version. If we have addressed the reviewer\\u2019s most pressing concerns, we kindly ask the reviewer to consider adjusting their score to reflect this. Otherwise, we look forward to proceeding with the discussion and incorporating any further feedback to our manuscript.\"}",
"{\"title\": \"Response to Reviewer Ciz2\", \"comment\": \"We thank the reviewer for their thoughtful review and appreciation of our contribution. We are pleased the reviewer recognizes the value of Century in addressing a key evaluation gap.\\n\\nRegarding the reviewer's concern about potential biases, we acknowledge that platforms like Wikipedia, from which we draw data, may introduce bias. However, our analysis (Table 8 in Appendix K) shows that Century includes images from all UN world subregions, with each represented by at least 5.1% of the total images. While there is an over-representation of Western Europe and North America, likely inherited from Wikipedia, Century still demonstrates a global reach.\\n\\nInitially, we intended to pursue direct collaboration with cultural heritage institutions and archives to increase representation for certain groups, but later found this was not feasible for this project. However, we believe Century provides a strong foundation for future partnerships and research with these institutions.\\n\\nWe agree that incorporating participatory methods to assess image sensitivity is crucial, and we have highlighted this direction in our discussion under \\\"Lack of targeted inclusion of affected communities.\\\" By releasing Century with geographical labels for each image, we hope to facilitate future research that includes participatory perspectives, including ethnographic studies focused on under-represented populations.\\n\\nIn response to the reviewer's questions:\\n\\n* Automating Century evals: We demonstrate the performance of off-the-shelf foundation models as raters for the historical contextualization task. Table 3 (pg 3) demonstrates the directional alignment between the human ratings and autorater decisions. However, we believe more work is necessary to increase the accuracy and calibration of automated evaluation, and recommend retaining human evaluation for final model comparisons given the sensitivity and subjectivity of the task.\\n\\n* Potential misuse of the dataset: We believe the risk of misuse is low. We do not release any raw model outputs or human/auto-rater signals that could be used to train models to generate harmful outputs. Section 3 clarifies the specific fields included in the open-sourced datasets (lines 320-323).\\n\\n* Exacerbating biases through training: Century is not intended as a training dataset, so we do not anticipate it contributing to bias amplification in that way. However, we recognize the importance of improving dataset representativeness to enable effective evaluation of model performance across diverse communities.\\n\\nWe thank the reviewer again for their thorough review and appreciate their view that this work offers a compelling contribution to the ICLR community.\"}"
]
} |
1JhSJIYX3p | Large Language Models Engineer Too Many Simple Features for Tabular Data | [
"Jaris Küken",
"Lennart Purucker",
"Frank Hutter"
] | Tabular machine learning problems often require time-consuming and labor-intensive feature engineering.
Recent efforts have focused on using large language models (LLMs) to capitalize on their potential domain knowledge.
At the same time, researchers have observed ethically concerning negative biases in other LLM-related use cases, such as text generation. These developments motivated us to investigate whether LLMs exhibit a bias that negatively impacts the performance of feature engineering. While not ethically concerning, such a bias could hinder practitioners from fully utilizing LLMs for automated data science.
Therefore, we propose a method to detect potential biases by detecting anomalies in the frequency of operators (e.g., adding two features) suggested by LLMs when engineering new features. Our experiments evaluate the bias of four LLMs, two big frontier and two small open-source models, across 27 tabular datasets. Our results indicate that LLMs are biased toward simple operators, such as addition, and can fail to utilize more complex operators, such as grouping followed by aggregations. Furthermore, the bias can negatively impact the predictive performance when using LLM-generated features. Our results call for mitigating bias when using LLMs for feature engineering. | [
"LLMs",
"feature engineering",
"bias",
"tabular data",
"automated data science"
] | https://openreview.net/pdf?id=1JhSJIYX3p | https://openreview.net/forum?id=1JhSJIYX3p | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xoBMPLm7ap",
"xkiMWgkLXj",
"sndYXddr4z",
"qpeqxACzn7",
"kGSj2M7QNU",
"goKGsyiJZM",
"gjFcyjbll3",
"eRO2RlB0lO",
"bL9e1bCTq6",
"Zw17zVMKQB",
"Z1AzFQu9oS",
"VKAQFWrcSt",
"QhaOvn4Dpn",
"KMgascOFY5",
"IIFaxxKRCe",
"EKhMBNww5V",
"DaqdcZolwB",
"6JSqTIdnWb",
"26svZ3YuIk",
"0tuIFbfsSG",
"0tO1L81kRs"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732094403658,
1731921280594,
1732526944323,
1730690879550,
1732742828159,
1731921575513,
1733297164888,
1733297067598,
1732526915990,
1731921445045,
1730569253945,
1730698158009,
1731921663239,
1732742089839,
1731921222656,
1732742492962,
1732742153507,
1731921500693,
1730583399421,
1732526872576,
1732685345663
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Reviewer_oDSt"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Reviewer_31oL"
],
[
"ICLR.cc/2025/Conference/Submission7277/Reviewer_JGk8"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Reviewer_bAV4"
],
[
"ICLR.cc/2025/Conference/Submission7277/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7277/Reviewer_oDSt"
]
],
"structured_content_str": [
"{\"title\": \"Revision\", \"comment\": [\"Dear Reviewers,\", \"We sincerely appreciate the time and effort you dedicated to reviewing our submission. Your feedback has helped to improve our paper. Below, we present the revision changes, which we marked green in the submission. Additionally, we have addressed all of your concerns raised in your reviews respectively under your reviews.\", \"Our paper was strongly motivated by the success of different feature engineering methods with LLMs, most notably CAAFE, which we clarified in the introduction. We additionally highlighted the influence of CAAFE in our proposed feature generation method, which we clarified further in the related work section.\", \"We further explained the metric \\u201crelative improvement in predictive accuracy\\u201d, which we use several times in our \\u201cResults\\u201d section when comparing different LLMs to OpenFE. We clarify that we here compare systems without feature engineering to systems with feature engineering (performed by LLMs and OpenFE, respectively).\", \"We reformulated the takeaways of the additional experiment. We clarified that this experiment ensures that the experienced bias towards the simple operators is not a positional bias induced by our prompt template.\", \"We highlight the necessity of researchers finding ways to strengthen LLMs effectively for tabular data problems so that they can be used effectively.\", \"We provide a critical difference plot to highlight the statistical significance of our results in Appendix G.1\", \"In Appendix G. 2, we compare operator selection frequencies on a subset of benchmark datasets between GPT-4o-mini and the more powerful GPT-4o to highlight that the bias is similar in these models. This leads us to believe that using more powerful models does not fix this bias.\", \"We provide a comparison of frequencies with which each feature of a dataset is selected by the LLMs to generate new features in Appendix G.3\"]}",
"{\"comment\": \"## Questions\\n\\n> \\u201cIf OpenFE is considered ground truth, why not compare directly to OpenFE final generated feature set?\\u201d\\n\\nSimilar to previous work on text generation [6], we are looking for trends in the distribution of the output of an LLM to obtain a representative, meaningful conclusion. \\n\\nA direct \\u201cmatch\\u201d comparison to OpenFE\\u2019s feature set would fail to capture this. For example, if the LLM suggested using different but still complex operators such as GroupByThenMean instead of OpenFE\\u2019s GroupByThenRank, that would not prompt a negative bias. Yet the LLM would have \\u201cfailed\\u201d in the direct comparison to OpenFE. \\n\\n> *why not look at the original features that were input into this operation?* \\n\\nWe have added an additional figure for that in Appendix G.2 Figure 11. Here, we present the frequencies with which the LLM selects each feature from each dataset. As apparent, in most cases, the LLM is reasonably sure which features it would like to transform, as indicated by the relatively high frequencies for certain features. However, it is then, in our opinion, limited by the fact that it repeatedly uses the same simple operators on these features instead of trying different operators on the same problem (an approach similar to random search, which the method would suggest one would do if one had 1: no understanding of the underlying context, 2: an unbiased opinion on all operators). This further strengthens our conclusion that this is a bias problem regarding the existing operators.\\n\\nWe sincerely hope that we were able to clarify your questions and concerns. If you have any more questions or concerns regarding our work, we are happy to answer them as well. \\n\\nFurthermore, we would be grateful for any specific pointers to improve our work further.\\n\\nThank you very much, and kind regards,\\nThe Authors\\n\\n## References:\\n[1] Chatbot Arena, https://lmarena.ai/\\n[2] Hollmann et al. \\u201cLarge Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering\\u201d, Conference on Neural Information Processing Systems (2023), https://arxiv.org/abs/2305.03403\\n[3] Guo et al. \\u201cDS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning\\u201d, ICML (2024), https://arxiv.org/abs/2402.17453\\n[4] Malberg et al., \\u201cFELIX: Automatic and Interpretable Feature Engineering Using LLMs\\u201d, Joint European Conference on Machine Learning and Knowledge Discovery in Databases (2024), https://link.springer.com/chapter/10.1007/978-3-031-70359-1_14\\n[5] Zhang et al, \\u201cELF-Gym: Evaluating Large Language Models Generated Features for Tabular Prediction\\u201d, Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, (2024), https://dl.acm.org/doi/abs/10.1145/3627673.3679153 \\n[6] Liang et al., \\u201cMonitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews\\u201d (2024),\", \"https\": \"//arxiv.org/abs/2403.07183\"}",
"{\"comment\": \"Dear Reviewer bAV4,\\n\\nWe would again like to thank you for your insightful comments regarding our submission. We hope that our revision resolved your questions and concerns. We would like to kindly ask, if our revisions influenced your score of our submission. If any further clarification is needed, please let us know.\\n\\nThank you, and kind regards,\\nThe Authors\"}",
"{\"summary\": \"The paper explores the featuring engineering powers of LLM with OpenFE as the baseline. The authors perform experiments on 27 datasets and 4 LLMs. The primary findings are the following.\\n\\n1. LLM perform worse than the baseline.\\n2. Proprietary \\\"Large\\\" models perform worse than small \\\"open\\\" models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The authors experimentally show the limitations of LLMs for feature engineering. The experimental setting is convincing.\", \"weaknesses\": \"1. The conclusions of the paper are along expected lines and are not surprising. A more notable contribution would be to address the limitations.\\n2. The statistical significance of the results is not provided.\\n3. The term \\\"bias\\\" is too strong for the problem explored. The authors can use the word \\\"limitation\\\".\", \"questions\": \"1. What is the statistical significance of the results shown in Table 3?\\n2. Why aren't larger models in GPT and Gemini family not explored?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"RE: Dispute Over the Use of the Word Bias\", \"comment\": \"Dear Reviewer oDSt,\\n\\nWe would like to reiterate your comments about bias, as there seems to be a significant disconnect between our opinions. It appears you are rejecting our work solely because we called the observed phenomenon a bias, even though we addressed all your other concerns.\\n\\n**We strongly believe that bias is the correct formulation to use in our work, following from its definition, the related work you referenced, and the context of our work.**\\n\\nThe definition of bias is [1]: \\n> \\u201cA tendency, inclination, or leaning towards a particular characteristic, behaviour, etc.;\\u201d, \\u201cDistortion of a statistical result arising from the method of sampling, measurement, analysis, etc.;\\u201d\\n\\nThe first definition strongly matches our work as we have found a clear tendency of LLMs to lean towards a particular characteristic (i.e., simple operators). \\n\\nAdditionally, from your referenced paper by McCoy et al. (Section 3.3): \\n> \\u201cSpecifically, we predict that LLMs will be biased toward producing high-probability sequences of words, meaning that their performance will get worse when the correct output is in fact low-probability.\\u201d\\n\\nThis matches our work and our general assumption that the usage of complex operators is just too sparsely represented in training data, as you also stated in your response, which leads to the LLM rarely selecting these operators, even though they can be useful in many cases. **Note that the work you referenced specifically names this a \\u201cbias\\u201d** for exactly these situations where the LLM is biased because of low probabilities in the output sequence. \\n\\nThe origin of the bias does not change the fact that it is still a bias. A model that was trained on biased data (e.g., in fairness problems) is still considered biased. \\n\\nLastly, as mentioned in our related work section where we consider \\u201cBiases in LLMs\\u201d, we are aware of the traditional usage of the word bias in a social context and clearly stated that we are looking at something similar but different (Line 98).\\n\\n\\n\\nKind Regards,\\n\\nThe Authors\\n\\n---\\n[1] http://www.oxforddictionaries.com\"}",
"{\"comment\": \"Dear Reviewer bAV4,\\n\\nWe highly appreciate the effort taken to review our paper and thank you for the valuable feedback. We are happy to see your satisfaction with our general experimental workflow. Additionally, we thank you for highlighting one of our takeaways - that one has to be careful when using an LLM as a fix-all solution - as they are possibly limited by negative biases. We carefully considered the concerns you raised in your review and aim to address them below\\n\\n## Weaknesses\\n\\n> *The main issue with this paper is that it is rather unclear why the usage of LLMs for this task was explored at all* \\n\\nWe base our work on the existence of different applications of LLMs in regard to feature engineering for tabular data. We reviewed and presented them in the \\u201cRelated Work\\u201d section of our paper, where we included a paragraph on \\u201cFeature Engineering with Large Language Models\\u201d. Apart from multiple different approaches in this area of work, our work was mainly motivated by the paper \\u201cCAAFE - Context Aware Automated Feature Engineering\\u201d which introduced a novel approach to using LLMs to generate new features on a tabular dataset. Driven by the success of this method (successful improvements in predictive accuracy on 12/14 benchmark datasets, results from CAAFE paper), our own investigation led us to the bias we discovered in LLMs when employing them for such a task, a problem which in our opinion can\\u2019t be neglected when using LLMs for feature engineering. Moreover, a problem that follow-up work for CAAFE seems to be ignorant of so far. \\n\\nWe added the reference from related work to the introduction to clarify the relationship and motivation. \\n\\n> *Frankly, it's also not a task that I would intuitively expect LLMs to be good at, \\u2026, expert domain knowledge or lived experience, \\u2026*\\n\\nWe generally agree with you about LLMs' capabilities in that regard. However, all existing related work (and papers submitted to ICLR) would disagree with us and have applied LLMs to this task. We note that most of them also do not perform memorization tests because they want to show their method is better while we try to really understand the limitation (resulting from the bias) for the downstream tasks of LLMs. \\n\\nLikewise, our primary motivation is not to expect the LLM to act as a highly knowledgeable expert on any given topic but rather to exploit some capabilities of understanding semantic context information with a given dataset, a set of information usually available for tabular datasets. Existing black-box feature engineering methods do not use such context information, which motivated most related work. The general motivation is therefore that this information can carry weight, but it is currently not used in most methods.\\n\\n> *and no actual attempt is made towards the #2 target*\\n\\nOur work is a call to action for the community, related work, or concurrent work. Likewise, it functions as a warning to practitioners or Kaggle experts who are actively using or considering using LLMs for their applications. \\n\\nWe believe our contribution highlights the unexpected existence of a bias that prompts poor performance, which is valuable research. Moreover, it is research that proposes a method to detect such a bias that requires scrutiny of peer review. Otherwise, practitioners and related or concurrent work will never consider investigating such a problem in their own systems. 
\\n\\nIn conclusion, our work provides a meaningful contribution even though we did not address #2 in the paper. \\nNevertheless, we would like to mention that we tried to address #2 in our own systems and applications but have not found a solution that avoids bias (via in-context learning or fine-tuning of an LLM). This, however, does not mean there are no possible solutions. Thus, we did not believe that such a negative result as part of the appendix would contribute anything meaningful to the goal of this paper.\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"title\": \"Withdrawal\", \"comment\": \"We again thank the reviewers for their time and effort to review our paper. We have considered all raised concerns in great detail and addressed each individually. Thanks to the authors' comments, we improved our manuscript in different areas, for which we are grateful.\\n\\nUnfortunately, there was no possibility of discussing our changes in greater detail with the reviewers during the discussion period. Even though we addressed each reviewer's concerns individually and in great detail, we received no further comments regarding our initial responses from multiple reviewers. \\n\\nWe also received a high confidence rejection due to naming the experienced phenomenon we observed as \\u201cbias\\u201d. From the discussion below about this difference in opinions, we firmly believe that the word bias was justified in our case and that we provided strong background references that justify our usage of the word. \\nNevertheless, we are thankful for each reviewer's comments and positive input which helped improve our work.\\n\\nWe have decided to withdraw our submission and are currently working on further improving this work. We aim to publish our work in the future as we firmly believe that our findings present valuable input to the community.\"}",
"{\"comment\": \"Dear Reviewer oDSt,\\n\\nWe deeply hope that our response addressed all of your questions about our submission. We would therefore like to kindly ask if you reconsidered your score.\\n\\nKind Regards,\\nThe Authors\"}",
"{\"comment\": \"Dear Reviewer oDSt,\\n\\nThank you for your time and effort in reviewing our paper. We are happy to see that you find our evaluation setup convincing and that you additionally see our contribution in showing the limitations of LLMs in regard to feature engineering. We also appreciate your valuable critiques, and in an effort to address your concerns, we carefully considered them and aim to address them below.\\n\\nIn your summary, you focus on the predictive performance of the LLM. Similarly, in the weaknesses and questions, you also focus on the predictive performance of the method / LLMs. Our work does not focus on predictive performance. We focus on the bias of the generated output, which, coincidently, results in worse predictive performance. This bias could also exist without a drop in performance (e.g., the LLMs could be good because they engineer too many simple features). \\n\\nTherefore, we would like to know if you have other concerns about our method that determines the bias that prompted your negative assessment of our contribution and work. Specifically, do you see a problem with our analysis of the bias of LLMs? \\n\\n## Weaknesses\\n1. This is not an expected result for us, especially considering prior work and practitioners actively using LLMs for feature engineering. We would be grateful if you could point us to related works that show similar conclusions to those our work has found. \\nTo illustrate, consider the following: there are possible features that can be transformed into new valuable features. These features can be transformed using complex transformations, which are not easy to find except if you understand the context of the data. Now, if an LLM had no contextual understanding of the data (similar to traditional feature engineering methods which do not employ LLMs for context understanding), we would expect this LLM to perform something similar to a random search over all features and operators, as every operator should be similarly weighted at first. However, this is not the case, and the LLM heavily relies on few (simple) operators; without these operators resulting in strong improvement in predictive accuracy. \\nThis is unexpected, especially since our community (e.g., prior work and Kaggle) believes that LLMs are capable of good feature engineering [1], [2], [3].\\n\\n2. Thank you for pointing this out. We have now added an evaluation of statistical significance for the predictive performance using a CD diagram [4] in Figure 9. It clarifies that OpenFE, considering its rank, is superior to all LLM-based methods. Moreover, OpenFE is significantly different from Gemini and GPT-4o-mini. Additionally, it shows the similarities between the capabilities of GPT-4o-mini and Gemini-1.5-flash as well as the slightly better performance of Llama3.1-8b and Mistral-7b-v0.3 which are again similar to each other, as also stated in the paper (Table 3, Figure 5). \\n\\n3. Relating to the above explanation, we would like to stand by the word bias. An unbiased LLM could perform similarly to a random search. It would, therefore, result in a smoother distribution over all operators, which is not the case. The term bias, in this sense, is, of course, only related to the affinity of an LLM towards proposing certain mathematical operators. Furthermore, the word bias has a very clear meaning in the related work and literature related to analyzing LLMs [5].\"}",
"{\"summary\": \"This paper investigates Large Language Models' (LLMs) capabilities in feature engineering for tabular data. The study examines four LLMs across 27 tabular classification datasets that were specifically selected to minimize potential memorization effects. For each dataset, the LLMs were tasked with recursively generating 20 new features, using prompts that contained task context, descriptions of original features, and available operators. The study benchmarks these results against OpenFE, an open-source feature engineering tool, using identical operators and original features. To evaluate the effectiveness of the engineered features, a LightGBM model was trained and tested on datasets combining original and constructed features, with classification accuracy as the performance metric. The results demonstrate that OpenFE produces more consistently beneficial feature sets for classification tasks. Through analyzing operator frequency in feature construction across both LLMs and OpenFE, the authors conclude that LLMs exhibit a bias toward simpler operators when no explicit operator preferences are specified in the prompts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper presents a novel investigation into LLMs' feature engineering capabilities. The authors introduce an innovative evaluation metric\\u2014operator frequency distribution\\u2014which effectively quantifies the patterns in operator selection during feature construction. This metric provides valuable insights into how feature engineering tools, particularly LLMs, exhibit preferences for certain operators under different task contexts and prompt conditions. Furthermore, the study's comprehensive evaluation across 27 tabular datasets, with careful consideration for LLM memorization effects, demonstrates robust experimental design and systematic methodology.\", \"weaknesses\": \"The paper's analysis lacks sufficient depth in several crucial areas. While the proposed operator frequency metric is interesting, it requires further validation in terms of:\", \"effectiveness\": \"There is no analysis comparing the variability and information content of features generated by simple versus complex operators.\", \"fairness\": \"The operator-level analysis overlooks that identical operators applied to different features can yield vastly different outcomes, making tool comparisons based solely on operator frequency potentially misleading.\", \"implications\": \"The study lacks experimental evidence linking complex operator usage to improved classification performance.\\n\\nThe paper's conclusion about LLMs' preference for basic operators requires additional validation. The authors did not explore prompting strategies to encourage complex operator usage, nor did they analyze the specific features and operators suggested by LLMs.\\nThe narrative structure could be improved. For instance, the abstract's discussion of LLM bias in text generation appears tangential to the core focus on feature engineering. 
Similarly, the section on 'Other Applications of Large Language Models for Tabular Data' would be better integrated into the literature review rather than appearing as a standalone paragraph.\", \"questions\": \"1, Could you clarify the source of the original features\\u2014were they extracted or provided with the datasets?\\n2, Have you considered experimenting with prompts that encourage the use of complex features, perhaps by emphasizing intricate relationships between original features?\\n3, What methods were used to validate the effectiveness, fairness, and implications of the operator frequency metric?\\n4, How did you account for the stochastic nature of LLM responses, where identical prompts might yield different operators and features?\\n5, Would it also be informative to evaluate model performance using only the generated features, excluding original features? Maybe you can try this.\\n6, Have you conducted feature-level analysis of the constructed features? Specifically:\", \"classification_performance_level\": \"Identifying dominant features in both LLM and OpenFE-generated sets\", \"feature_level\": \"Analyzing the characteristics of successful versus unsuccessful generated features\\nCombining classification-level, feature-level, and operator-level analyses to strengthen conclusions about LLMs' feature engineering capabilities.\\n7, A potential typo in Hypothesis 1: \\u201cHYPOTHESIS 1: FEATURE ENGINEERING WITH LARGE LANGUAGE MODELS IS BIASED TOWARD SIMPLE OPERATES.\\u201d The last word should be \\u201cOPERATORS\\u201d?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors investigate how well LLMs can engineer features for tabular datasets. Specifically, they look at the frequencies of operators, and find that there is bias toward simpler features rather than more interesting or useful ones. They also evaluate on the downstream accuracy of the models trained with and without the engineered features.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It is good to see more examples of evaluation of downstream LLM tasks \\\"in the wild\\\".\\n\\nI appreciate that the authors were rigorous in removing datasets that were thought to be memorized or in the training data of the LLM, even though they did not have access to the training data itself.\", \"weaknesses\": \"To me, this doesn\\u2019t seem like an influential enough contribution. Not only is it tackling a very narrow problem, but it is also only evaluating a specific method for addressing that problem. While there is some prior work around using LLMs for feature engineering, I\\u2019m not convinced that this work\\u2019s method for feature engineering is necessarily representative of all the methods for using LLMs for this task.\\n\\nSpecifically, the authors only use one prompting strategy, on a snapshot of models at the current time that this paper is being written. A few examples of people using LLMs for feature engineering are cited (Hatch, 2024; T\\u00fcrkmen, 2024), but it is unclear what methods these citations used\\u2013 is the author\\u2019s method the same, or inspired by them? Should data scientists conclude from this paper that they should never use LLMs for feature engineering, even if they use a different method? Overall, I think this is an interesting use case to evaluate, but the work is not broad enough to be included in ICLR.\", \"nits\": \"\", \"typo\": \"\\u201cwhich is send to the LLM\\u201d \\u2192 sent\", \"questions\": \"I\\u2019m a little confused about the experimental setup regarding operations. If I understand correctly, the authors are comparing the distribution of operators generated by the LLM and by OpenFE. If OpenFE is considered ground truth, why not compare directly to OpenFE final generated feature set? For example, rather than just counting the number of times we see the operation \\u201cGroupByThanRank\\u201d, why not look at the original features that were input into this operation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"## Questions\\n1. Refer to the explanation above, but in short, we consider semantic information an important addition to tabular datasets. Existing SOTA methods are incapable of incorporating this information. LLMs are currently the best alternative we have at hand to process semantic information and gain some form of reasoning capabilities, which can be valuable when considering a tabular dataset in light of its semantic description.\\n2. Refer to the explanation above, but in short: methods that employ LLMs for the given task exist in multiple forms and by different users, so there is a real application of this right now. Therefore, an existing bias that might mitigate the capabilities of these approaches is essential information to be made aware of when working on/with such methods. \\n\\n## Small Issues\\n1. Thank you for highlighting this, we have improved the explanation in the paper. (L353-354) \\n2. Regarding the random experiment: Our primary motivation was to strengthen the validity of our prompting approach further. We also masked the names of the operators in this experiment. The idea behind this experiment was to mitigate the possibility of this bias not because the LLM is biased towards some operators but because it just always selects the first operator that comes in the list of possible operators in the prompting template. That is why we masked the names to solidify that the LLM can pick operators from every position in the prompt and that our experienced bias is not just a (strong) positional bias. We also agree that the statement was too strong, so we have toned it down and made it more specific to what the experiment shows. (Refer to Lines 434-435 and 448-449)\\n3. Thanks! \\n\\nWe sincerely hope that this response helps in answering your questions and concerns. If you have any more questions or concerns regarding our work, we are happy to answer them as well. \\n\\nFurthermore, we thank you for the insightful discussion and for giving us the opportunity to clarify and defend our contribution. We would be grateful to hear any further comments about the motivation of our work. \\n\\nThank you very much, and kind regards,\\nThe Authors\"}",
"{\"title\": \"Response Reviewer 31oL [1/2]\", \"comment\": \"Dear Reviewer 31oL,\\n\\nThank you for reviewing our paper. We are excited to see that you see our evaluation pipeline's robustness and systematic nature. We also noticed your critiques and will address them below.\\n\\nIn summary, your review focuses on the alleged lack of depth, inadequate validation of the operator frequency metric, unsubstantiated conclusions, and a narrative structure that detracts from its overall coherence. We hope that we understand your concerns correctly. If not, we encourage you to reiterate your concerns and explain them to us in greater detail.\\n\\nAssuming we understand your concerns correctly, we will address your weaknesses.\\n\\n## Weaknesses\\n> Effectiveness: There is no analysis comparing the variability and information content of features generated by simple versus complex operators\\n\\n\\nWe consider an in-depth analysis of features generated by simple vs. complex operators out of scope for this work. The scope of our work is to detect a bias. As we show, it was sufficient to compare operators used by existing automated feature engineering methods with context-aware feature engineering methods (using LLMs). We do not see how characteristics of the features would support our claim, especially as this information is conditional on the datasets and not comparable between multiple datasets. \\n\\nFrom comparing the operator distributions, we were able to see a clear disconnect between the operators selected by OpenFE and the ones selected by the LLMs. This leads to the second hypothesis of this paper, which is that this bias negatively impacts the value of features (in a general case). This was apparent through comparing predictive accuracy results where LLM-based methods were constantly subpar to non-LLM-methods (OpenFE).\\n\\n> Fairness: The operator-level analysis overlooks that identical operators applied to different features can yield vastly different outcomes, making tool comparisons based solely on operator frequency potentially misleading\\n\\nWe ask you top clarify how the described problem is a fairness problem or potentially misleading. \\n\\nOur conclusion is not affected by which features are selected, because the bias exists no matter the features (the prompt does not change based on the select feature) and the performance is a suitable indicator for the impact across many datasets. Moreover, we show that the feature selection part of our pipeline has no significant impact, see Appendix G.3. \\n\\n> Implications: The study lacks experimental evidence linking complex operator usage to improved classification performance.\\n\\nWe do not claim anywhere in the paper that complex operators lead to improved classification performance. This would also be a false implication, as complex operators do not necessarily lead to better performance in all cases. We solely claim that when considering the distribution of operators of OpenFE (which has a notably higher usage of complex operators) and comparing it with those of an LLM-based method (which in some cases uses almost no complex operators at all), the outcome in predictive accuracy seems to be on average superior when spreading better across all types of operators. 
In other words, we claim that there are situations in which a complex operator may be useful, which the LLM fails to realize as it almost never suggests complex operators as useful transformations.\\n\\n> The authors did not explore prompting strategies to encourage complex operator usage, nor did they analyze the specific features and operators suggested by LLMs.\\n\\nWe considered this point and added Figure 11 in Appendix G.3, comparing which features the LLM selected. From this figure, we conclude that the LLM is certain which features it considers important for feature engineering in most cases; however, it fails to try different combinations of these features with different operators.\\n\\nOur work explicitly calls for this as future work; we further clarified this in the updated manuscript. Moreover, we unsuccessfully tried different prompting strategies to avoid bias in our own applications (as detailed in another response to a review above).\"}",
"{\"comment\": \"Dear Reviewer JGk8,\\n\\nWe highly appreciate your effort to review our paper, and thank you for the valuable feedback. We thank you for your positive feedback on the precision of our work, specifically highlighting our action of memorization tests before evaluating LLMs.\\n\\nWe carefully considered the concerns you raised in your review and aim to address them below.\\n\\n## Weaknesses\\n\\n> \\u201c tackling a very narrow problem\\u201d\\n\\nFeature engineering is a highly important problem for tabular data. Tabular data problems are omnipresent in real-world applications, especially in industry. Thus, while we agree that this is a narrow problem from the point of view of the LLM space, our work tackles a major problem in the world of tabular data\\u2014which is the focus of our work. Improving LLMs specifically for downstream tasks related to tabular data can have major positive implications for many applications. \\n\\n> \\u201c, but it is also only evaluating a specific method for addressing that problem\\u201d\\n\\nAs described in Section 3 Stage B), the method we created was chosen primarily to enable our analysis (e.g., by avoiding code generation failure). Our work tries to analyze the world knowledge of LLMs and not a specific method for feature engineering. Our method enables such an analysis, and we would appreciate specific feedback as to why it would not be representative of the world knowledge of an LLM. \\n\\n> \\u201c..., on a snapshot of models\\u201d\\n\\nWe used some of the most well-known models. OpenAI's GPT models are especially heavily used in most work with LLMs as a benchmark, specifically in related work. Data Science tasks with LLMs usually use GPT4 and Llama models as a benchmark. Hence, the evidence that these models induce such a strong bias can be valuable information for fellow researchers. Furthermore, we added an additional experiment with GPT4o in Figure 10, one of the strongest models on the market (according to LLM arena [1]), which also exhibits the bias presented in our work. We were not able to use Claude due to rate limiting of the API. \\n\\n> *A few examples of people using LLMs for feature engineering are cited (Hatch, 2024; T\\u00fcrkmen, 2024), but it is unclear what methods these citations used*\\nWe specifically refer to these authors as both rely on CAAFE [2] (which we also refer to in the related work section) for well-known Kaggle competitions. Both these citations strengthen our motivation that automated feature engineering methods with LLMs are used in practice. In academia, we can see CAAFE being used or built upon in several related works, such as DS-Agent [3], FELIX [4], or ELF-Gym[5]. In short, CAAFE is, to the best of our knowledge, the most prominent feature engineering method with LLMs. \\n\\nCAAFE is especially relevant to us as this method also follows an approach where the LLM generates new features from existing ones by applying operators on these existing features. 
The core approach is highly similar, but to enable our analysis, we decided to let the LLM provide us directly with the operators it would like to apply rather than parsing the generated Python code (CAAFE\\u2019s method of feature generation).\\n\\nWe adjusted the paper to reflect this connection better and highlight CAAFE in greater detail; refer to lines 89-93.\\n\\n> *Should data scientists conclude from this paper that they should never use LLMs for feature engineering, even if they use a different method?*\\n\\nIn our opinion, data scientists should conclude from this work that LLMs can be employed for feature engineering; however, when employing the LLM to directly generate new features from existing ones with mathematical operators (like CAAFE, which is used in practice, as follows from the citations discussed above), one has to consider that this is biased and the LLM might fail to find more complex solutions as it heavily relies on simple operators. Therefore, our work shows the tabular community that new research efforts to further strengthen LLMs are needed, genuinely enabling the use of LLMs for tabular downstream tasks. Likewise, it shows the LLM community that we require methods to explore and avoid bias when using LLMs for downstream tasks. \\n\\nWe now further clarify this call to action and takeaway in our conclusion.\\n\\n> \\u201cNits: Typo: \\u201cwhich is send to the LLM\\u201d \\u2192 sent\\u201d\\n\\nThank you for pointing this out; we fixed it in the updated paper version. Ref: L:732\"}",
"{\"title\": \"Re: Exploring Prompts\", \"comment\": \"Dear Reviewer 31oL,\\n\\nThank you for your question! We have tried to address the bias we have found in our systems and applications but have not found a solution that avoids the bias. We unsuccessfully tried in-context learning (plus prompt tuning as part of this) and fine-tuning an LLM. This, however, does not mean there are no possible solutions. Thus, we did not believe that such a negative result as part of the appendix would contribute anything meaningful to the goal of this paper.\\n\\nIn essence, our work is a call to action for the community, related work, or concurrent work. Likewise, it serves as a warning to practitioners or Kaggle experts actively using or considering using LLMs for their applications. We believe our contribution highlights the unexpected existence of a bias that prompts poor performance, which is valuable research. \\n\\nKind regards,\\nThe Authors\"}",
"{\"title\": \"Response Reviewer 31oL [1/2]\", \"comment\": \"> The narrative structure could be improved. For instance, the abstract's discussion of LLM bias in text generation appears tangential to the core focus on feature engineering. Similarly, the section on 'Other Applications of Large Language Models for Tabular Data' would be better integrated into the literature review rather than appearing as a standalone paragraph.\\n\\nThese are two points on which we do not understand your concerns. In a paper where the main focus of the work is a bias in LLMs, we consider biases in LLMs for text generation important related work as it lays a foundational understanding of bias in LLMs. For the second part, it is obvious from the paper that the paragraph \\u201cOther Applications of Large Language Models for Tabular Data\\u201d is part of the related work. Please reiterate what you mean by \\u201c[...] would be better integrated into the literature review [...]\\u201d and how the narrative structure, specifically in the context of our work, can be improved. \\n\\n# Questions\\n1. We do not entirely understand what you mean by this question. For benchmarking, we used well-known tabular datasets from the AutoML benchmark [1], which consist of features and data points. So, the original features are already part of the dataset. \\n2. No, we have not done this yet. This is mainly due to two factors. First, we wanted an unbiased evaluation where every operator is weighted equally so we could see the true distribution of the LLMs. Second, we expect the LLM to know the correct operators in fitting cases. As stated above, complex operators may not always be a useful solution. However, we want the LLM to know when to select complex operators and when not.\\n3. Refer to the Weakness section, where we explain our opinion on all of the stated problems.\\n4. Unfortunately we do not entirely understand what you mean by this question. We don't expect the LLM to be deterministic at all (e.g. always yield the same solution for an example dataset incorporated into our prompt). We just want to evaluate whether the LLM is able to also propose complex operators. This is also why we repeated our experiments across many datasets and repeated prompts per dataset.\\n5. In our opinion, this would not be useful as we are not interested in finding the optimal feature combination when predicting on a given task using solely this newly generated feature, nor do we expect the LLM to find this. What we consider here is a data science expert wanting to employ an LLM for automated feature engineering on a given tabular data problem, a setting present in well-known methods like CAAFE [2], DS-Agent[3] or FELIX[4]. In this case, the expert would expect the LLM to find a fitting solution for the given table (which the expert would probably not drop entirely as this would lose information).\\n6. As shown in the plots in our results, we have conducted operator level analysis (Figure 2, Figure 3, Figure 4), and feature level analysis (Figure 11), as well as classification level analysis (Figure 5, Table 3)\\n7. Thank you! We have fixed this in our revised paper version.\\n\\n[1] Gijsbers, Pieter, et al. \\\"An open source AutoML benchmark.\\\" Journal of Machine Learning Research 25.101 (2024), https://arxiv.org/abs/2207.12560\\n[2] Hollmann et al. 
\\u201cLarge Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering\\u201d, Conference on Neural Information Processing Systems (2023), https://arxiv.org/abs/2305.03403\\n[3] Guo et al. \\u201cDS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning\\u201d, ICML (2024), https://arxiv.org/abs/2402.17453\\n[4] Malberg et al., \\u201cFELIX: Automatic and Interpretable Feature Engineering Using LLMs\\u201d, Joint European Conference on Machine Learning and Knowledge Discovery in Databases (2024), https://link.springer.com/chapter/10.1007/978-3-031-70359-1_14\"}",
"{\"comment\": \"## Questions\\n1. See Figure 9. \\n2. This was mainly because we wanted to use publicly available API endpoints and LLM providers, as most practitioners usually do. As a large-scale evaluation was important to us to have a strong foundation for our results, the API costs for more powerful models would not be feasible for our evaluation. However, we updated our submission to provide Appendix G.1 Figure 10, where we benchmarked our idea on a subset of datasets and compared the distributions of operators between GPT-4o-mini and the more powerful GPT-4o. As apparent in this figure, the bias is similar, and especially for the GPT models, the fact that they have a very limited range of operators they select at all and still do not select complex operators remains the same.\\n\\nWe sincerely hope that this response helps in answering your questions and concerns. If you have any more questions or concerns regarding our work, we are happy to answer them as well. \\n\\nFurthermore, we would be grateful for your opinion on our core contribution, the method for determining bias, and how we could improve it. \\n\\nThank you very much, and kind regards,\\nThe Authors\\n\\n## References\\n[1] Hollmann et al. \\u201cLarge Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering\\u201d, Conference on Neural Information Processing Systems (2023), https://arxiv.org/abs/2305.03403\\n[2] Han et al. \\u201cLarge Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning\\u201d (2024), https://arxiv.org/abs/2404.09491\\n[3] Zhang et al, \\u201cELF-Gym: Evaluating Large Language Models Generated Features for Tabular Prediction\\u201d, Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, (2024), https://dl.acm.org/doi/abs/10.1145/3627673.3679153\\n[4] Dem\\u0161ar, Janez. \\\"Statistical comparisons of classifiers over multiple data sets.\\\" The Journal of Machine learning research 7 (2006): 1-30.\\n[5] Navigli et al, \\u201cBiases in large language models: origins, inventory, and discussion\\u201d ACM Journal of Data and Information Quality (2023), https://dl.acm.org/doi/10.1145/3597307\"}",
"{\"summary\": \"The paper tests LLM bias in the task of feature engineering for training a downstream learning system. The paper uses several LLM settings across 27 datasets, to demonstrate that LLMs do indeed have bias for this task, which indeed seems to lead to poor performance compared to an existing SOTA automated feature engineering solution. Some further discussion and experimentation into the properties of the bias shows that the LLM seems to prefer simpler features when the features have meaningful names, but also doesn't sample uniformly when the features have nondescript random names.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is based on solid experimental work, testing using several LLMs and across many datasets, testing for memorization issues separately to check for bias explicitly.\\nThe paper is an interesting and easy to follow read. Problematic properties of LLM solution paths for different problems are always appreciated, as we develop more and more systems that significantly rely on this tool, we must strive to understand the biases this seemingly easy fix-all solution of asking an LLM brings into our work and the times it might fail completely. It is also interesting that the large models have failed worse at adding features that helped with the downstream system's results compared to the smaller models, which did help a little.\", \"weaknesses\": \"The main issue with this paper is that it is rather unclear why the usage of LLMs for this task was explored at all. It seems that when feature engineering is done by an LLM, the downstream system's performance is worse than existing SOTA systems - and sometimes even worse than doing any feature engineering at all. Frankly, it's also not a task that I would intuitively expect LLMs to be good at, as general knowledge, common sense and language knowledge is probably not what humans would use for feature engineering, but rather math/engineering skills and perhaps expert domain knowledge or lived experience - all usually less strong qualities of LLMs. The paper does not call this issue out or justify it. Usually, checking for biases of a solution might have one of two purposes: 1. call out poor performance that happens in a way that isn't expected or measured in other ways, so for example, if the system had seemingly good downstream performance, checking for biases or other issues might help guard us from using a problematic solution that looks good in our metrics. 2. try to improve the performance of the biased system by somehow mitigating the bias. It seems that option 1 in this case is unnecessary, since the LLMs have worse performance, and no actual attempt is made towards the #2 target.\", \"questions\": \"1. Why would I use an LLM for feature engineering anyway, if there are existing SOTA automated systems that already do it and perform much better?\\n2. If your answer to #1 is that I probably wouldn't, then the main question about publishing this paper would be - why would I read a paper about the biases such a solution might have? There could be several answers (e.g., to inspire a similar analysis of LLM solutions for other problems) but they need to be clear within the paper.\", \"small_issues\": \"1. Please explain that the improvement in \\\"Predictive Performance Improvement\\\" is improvement compared to a system without FE earlier in the document, e.g. before table 3.\\n2. 
While the random experiment is fun and adds to the paper, I don't think it is at all accurate to say that it tests \\\"whether our prompt template influenced our results\\\" - seeing as the prompt template itself did not change in this experiment, only the names of the features. I don't think it shows anything about the prompting strategy - but rather that the nature of the bias depends on the feature naming. \\n3. Caught typo: The API usage costed -> cost\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer JGk8,\\n\\nWe hope that our responses were able to address your concerns and answer your questions regarding our submission. We would kindly ask if they led you to reconsider your score. Please let us know if further clarifications are needed.\\n\\nKind Regards,\\nThe Authors\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"I thank the authors for responding to my comments, especially, for providing statistical significance results and providing additional references.\\n\\nI looked at the paper \\u201cBiases in large language models: origins, inventory, and discussion\\u201d and the primary sense of the word \\\"bias\\\" in the paper is \\\"social bias\\\". This was my point. \\n\\nThe authors have asked \\\"We would be grateful if you could point us to related works that show similar conclusions to those our work has found.\\\"\\n\\nI will encourage the authors to look at McCoy, R. T., Yao, S., Friedman, D., Hardy, M., & Griffiths, T. L. (2023). Embers of autoregression: Understanding large language models through the problem they are trained to solve. This paper shows the many of the surprising behaviors or hallucinations or \\\"biases\\\" of LLMs can be traced back to the training data. My hypothesis is that simple operators abound in web data and this is mirrored in the output. I do not have any other paper in mind.\\n\\nHaving read the thoughtful response and being appreciative of it, I am keeping the scores the same.\"}"
]
} |
|
1JgWwOW3EN | BenchMol: A Multi-Modality Benchmarking Platform for Molecular Representation Learning | [
"hongxin xiang",
"Ke Li",
"Zhixiang Cheng",
"Linlin Hou",
"Jun Xia",
"Wenjie Du",
"Li Zeng",
"xiangxiang Zeng"
] | Molecular representation learning (MRL) plays a vital role in high-precision drug discovery. Currently, people represent molecules in different modalities (such as sequences, graphs, and images), and have developed many MRL methods. However, three key challenges hinder further progress in the field of MRL: (i) Lack of systematic and unified evaluation of models of different modalities, resulting in unfair comparisons or being affected by randomness; (ii) The specific advantages of different molecular modalities are unclear; (iii) Lack of a unified platform to integrate data of different modalities and a large number of MRL methods. Therefore, we propose the first MRL platform supporting different modalities, called BenchMol, to integrate a large number of single-modal MRL methods with different modalities and evaluate them systematically and fairly. BenchMol has four attractive features: (i) Rich modalities: BenchMol supports 7 major modalities of molecules, such as fingerprint, sequence, graph, geometry, image, geometry image, and video; (ii) Comprehensive methods: BenchMol integrates 23 mainstream MRL methods to process these modalities; (iii) New benchmarks: BenchMol constructs two new benchmarks based on PCQM4Mv2 and ChEMBL 34, called MBANet and StructNet, for a more systematic evaluation. (iv) Comprehensive evaluation: the evaluation covers different aspects of molecules, such as basic attributes and molecular types. Through BenchMol, we conduct large-scale research on methods of different modalities and report many insightful findings. We hope that BenchMol can, on the one hand, help researchers quickly use MRL methods with different modalities, and on the other hand, provide meaningful insights into multi-modal MRL and help researchers choose appropriate representations in downstream tasks. We open-sourced BenchMol on \href{https://anonymous.4open.science/r/BenchMol}{Github}. | [
"Multi-Modality Learning",
"Benchmarks and Datasets",
"Drug Discovery",
"Molecular Representation Learning"
] | https://openreview.net/pdf?id=1JgWwOW3EN | https://openreview.net/forum?id=1JgWwOW3EN | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"sXxe3BIhiK",
"rML1yCQUYR",
"qBQwijyVYd",
"cIBXDywtEY",
"WW0jQztxEg",
"UXkDsqEdAj",
"PX6PZKgtba",
"8n7xIqM4UH",
"1Xl9B9iHAZ"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1730705617557,
1730363264539,
1737272306807,
1731059463053,
1732329107944,
1732551974153,
1730758158108,
1732427577026,
1730905880890
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_7P4y"
],
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_LcfM"
],
[
"ICLR.cc/2025/Conference/Submission714/Authors"
],
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_CyMa"
],
[
"ICLR.cc/2025/Conference/Submission714/Authors"
],
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_CyMa"
],
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_Sees"
],
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_LcfM"
],
[
"ICLR.cc/2025/Conference/Submission714/Reviewer_KqA8"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces BenchMol, a comprehensive and multi-modality platform specifically designed for molecular representation learning. TThe authors introduce two novel datasets and corresponding benchmarks, namely MBANet and StructNet on newly defined MRL tasks.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The author conducts extensive experiments in evaluating property prediction performance of MRL methods on existing datasets.\\n\\nThe paper re-evaluate a large amount of existing molecular representation methods on moleculeNet.\", \"weaknesses\": \"1. The authors propose a multi-modality benchmarking platform, yet the study predominantly focuses on the performance comparison of single-modality molecular representation learning\\nmethods and missing multimodal molecular representation learning methods (E.g. [1]), which is a critical weakness point considering the data scope as introduced in this paper. \\n\\n2. The rationale for providing a multi-modality dataset that compares single modality MRL methods is not clear, given that existing packages such as RDKit and OpenBabel already facilitate the conversion between different modalities for a given molecule(E.g converting SMILES to 2D molecular graph). This raises questions about the contributions of the proposed benchmarks compared to readily available tools. \\n\\n3. It\\u2019s better to demonstrate what kind of research gap in machine learning for chemistry this paper is trying to address. What certain type of chemistry questions is this paper trying to address, that may benefit the AI4Science community. For example, in section E, what specific chemistry problems does the atom distribution prediction task try to solve? How does a correction prediction of the atom distribution can benefit the chemistry community? \\n\\n4. The provided link for accessing the datasets is currently non-functional, as attempts to access the OneDrive URL listed in the README file result in a 'This site can\\u2019t be reached' error. Therefore, I am not able to reproduce some of the experiments. \\n\\n---\\n\\nMinor concerns. \\n\\n5. The presentation of Figures S3 (Pg. 21) is somewhat disorganized, notably, the font size on the x-axis of figure c and f is inconsistent with the rest. \\n\\n6. The organization of the manuscript could be improved for better readability; specifically, the description of the molecular print method is positioned on Page 24, while other molecular MRL methods summarized on Page 6. In addition, it is better to put a reference or hyperlink of the MRL method within each table. \\n\\n---\\n\\nFor improving this dataset and benchmark paper, [2] can be possibly considered as a reference. \\n\\n[1] Wang Z, Jiang T, Wang J, et al. Multi-Modal Representation Learning for Molecular Property Prediction: Sequence, Graph, Geometry[J]. arXiv preprint arXiv:2401.03369, 2024.\\n\\n[2] Velez-Arce A, Huang K, Li MM, Lin X, Gao W, Fu T, Kellis M, Pentelute BL, Zitnik M. TDC-2: Multimodal Foundation for Therapeutic Science. bioRxiv [Preprint]. 2024 Jun 21:2024.06.12.598655. doi: 10.1101/2024.06.12.598655. PMID: 38948789; PMCID: PMC11212894.\", \"questions\": \"My questions is listed in the Weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes a multi-modality molecular property benchmark for molecular representation learning (MRL) methods. It generates various modalities, including fingerprint, sequence, graph, geometry, image, geometry-based image, and video, and constructs new benchmarks using data from PCQM4Mv2 and CHEMBL 34. A range of single-modality methods are evaluated on both MoleculeNet and the newly constructed benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper is well-written and easy to understand.\\n2. The multi-modality methods are extensive, covering a broad range of modalities.\\n3. The conclusions from the linear probing experiments on different modality methods in MoleculeNet (Section 5.2) are insightful and interesting.\", \"weaknesses\": \"1. The labels in MBANet, consisting only of atom types, bond types, and basic molecular attributes, are relatively simple and lack practical value for a comprehensive molecular property benchmark.\\n2. I am curious about the rationale for splitting the data into different types (such as acyclic, complete chain, acyclic chain, macrocyclic peptide, macromolecule, and reticular) based on their 2D structural patterns. This approach implies an assumption that these distinctions are meaningful and that different modality models would clearly favor specific 2D graph patterns. However, the performance differences among various modality methods in Table 6 are minor and do not reflect the significance of such a split.\\n3. The molecular image and video modalities are generated from 2D or 3D structures. It would be helpful to clarify why these modalities are important and which tasks specifically benefit from such artificial representations.\\n4. Why do 3D modality-based methods, such as Uni-Mol, outperform other modalities on MoleculeNet tasks? Are there any insights or reasons behind this?\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank all the reviewers for spending their valuable time reading our paper and providing constructive feedback. We appreciate the insights shared, which we believe will help strengthen the core aspects of the paper. We also thank the reviewers for their positive comments on this paper, which we find encouraging. We are committed to carefully considering these comments and making necessary improvements to the presentation of the paper and the key research proposed.\\n\\nAfter careful consideration, we have decided to withdraw this paper from consideration. Thanks again to the reviewers and the Associate Editor for their time and feedback.\"}",
"{\"summary\": \"This study is a comprehensive examination of molecular representation learning (MRL) methods benchmarking, covering multiple molecular modalities (1D string representation, 1D fingerprints, 2D Graphs, 2D Images, 3D geometries, 3D geometry images, and video.\", \"the_study_proposes_3_separate_sets_of_datasets_to_evaluate_all_these_modalities\": \"MoleculeNet (pre-existing), MBANet (newly created), and StructNet (newly created). The first benchmark covers broad application in the biomolecular domain, the second benchmark evaluates the ability of MRL methods to capture basic molecular attributes, the third benchmark allows for discerning which MRL are more appropriate for which molecular types.\\n\\nThe study evaluates multiple MRL techniques and pre-trained models and draws 9 main insights from this extensive and large-scale examination, which I summarise as follows:\\n\\n1. All modalities are useful for almost every task (models from 6 modalities are the top 6 in performance).\\n2. Ensembling multiple conformers can improve image-based MRLs.\\n3. Sequence-based models (transformers and similar architectures) perform well even when randomly initialised and without fine-tuning, which suggests that they have good inductive biases.\\n4. Geometry images and videos are resilient to different image formatting conventions (RGB vs BGR).\\n5. Video modality is the best for recovering basic molecular information (MBANet benchmark).\\n6. Pre-training models improve performance on recovering basic molecular information (MBANet), therefore, pre-training tasks are useful for this purpose.\\n7. Performance on MBANet within MRLs leveraging the same modality is similar\\n8. Modality determines whether the model is best performing at different types of molecules (StructNet benchmark).\\n9. Certain pre-trained models perform worse against certain molecular types than their randomly initalised counterparts. Therefore, certain pre-training tasks might be better suited for certain molecular types and will be detrimental for others.\\n\\nFinally, the study presents tools for standarising multi-modal benchmarking on these datasets, provides splits for replicating results and utilities that are likely to accelerate multi-modal representation learning strategies.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Main strengths\\n---\\n1. The paper tackles a really important issue in molecular representation learning, provides a sound and comprehensive benchmark of existing methods, provides a clear way to compare the strengths and weaknesses of available MRL techniques and allows for a clear and fair comparison across modalities. Further they provide the utilities for reproducing their results easy to use. This research will significantly move the field forward and constitutes a strong baseline from which new multimodal research can build upon.\\n2. The paper is clearly written, results are concisely conveyed, the methodology is sound and detailed enough as to allow for the reproduction of the results.\\n3. Tables and data displayed clearly demonstrate and support the main claims and insights drawn by the authors.\\n4. Supplementary information is rich and comprehensive.\\n\\nMinor strengths\\n----\\n_(Details of the paper that do not have a direct bearing on my evaluation, but I think are commendable)_\\n\\n1. 
The insight regarding sequence-based models having enough inductive bias even when randomly initialised and with linear probing is highly interesting and could merit further exploration.\\n2. The experiments with multiple conformations for the image modality are really interesting, the insights drawn are highly informative, and they go beyond the paper's scope to give a really comprehensive evaluation of the benefits and idiosyncrasies of different modalities.\\n3. Visual design in the Figures and Tables is crisp and facilitates the comprehension of the paper.\", \"weaknesses\": \"Main weakness\\n---\\nIn Appendix B1, Figure S1, the histograms for the atom counts of certain atoms like Si (c), Br (f), P (g), S (h), Cl (i), B (j), Se (k), and particularly Ge (l), seem to be completely skewed and quite limited in the independent variable values (0, 1, 2). It seems that they'd be better suited for a classification task. I'd argue that the Ge task introduces only noise to the final metric as there is only one count of value 1; the rest are value 0.\\n\\nTherefore, it will either be in the training set and will not be tested, or it will be in the test set and the model will not know that such a value is even possible. I see that the issue with transforming them into classification tasks would be that the average of the classification and regression metrics would not make that much sense, and this could be alleviated by using correlation metrics like the Matthews correlation coefficient (or a multi-class generalisation thereof) for classification, and Spearman's or Pearson's correlation coefficient for regression. Another, probably simpler, alternative could be to remove the subtasks that are too skewed to be useful. I am not sure which option is best and I am open to the authors explaining their rationale for including them in their study. I think that the limitation of this specific part of the benchmark should at least be acknowledged in the main text. \\n\\nThis is also applicable to Figure S2 - b.\\n\\nThis is the only major weakness in an otherwise excellent study.\", \"questions\": \"1. The experimental results with the randomly initialised sequence-based models are quite intriguing and seem a bit counterintuitive, particularly as it pertains to the linear probing. Do you have any further intuition of what may be the underlying mechanism that provides them with such a remarkable inductive bias? Have you seen any dependence between model size and model performance in this particular setting?\\n2. Some datasets can be quite dirty: a single molecule can be translated into multiple SMILES strings depending on the specific algorithm used, which leads to some datasets having the same molecule duplicated but with different SMILES, making them difficult to distinguish. Have you done any tests to detect these duplications (e.g., canonicalising the SMILES or using InChIKeys)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"10\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"General Response to All the Reviewers\", \"comment\": [\"First, We sincerely thank all the reviewers for their insightful and constructive feedback on our manuscript. We are happy to hear that **our benchmark platform is comprehensive** ($\\\\frac{4}{5}$ CyMa, Sees, 7P4y, LcfM), and **our experiments are interesting, insightful, and surprising** (reviewers: $\\\\frac{4}{5}$ CyMa, Sees, LcfM, KqA8). In addition, They think **our paper is well-written and easy to understand** ($\\\\frac{2}{5}$ CyMa, LcfM).\", \"Second, we have substantially revised the paper and all revised content are highlighted in red. We summarize the primary revisions made to the paper:\", \"**In-depth Analysis of Insights:** We conducted extensive and substantial in-depth analysis of the presented insights, which are included in **Appendices K.1-K.8** to provide further insights.\", \"**Clarification of Multi-Modality:** Our multi-modality only describes that the number of modalities supported by BenchMol is large and the goal of the study is to study the differences between different modalities. Since the word \\\"multi-modal\\\" can easily lead to misunderstandings about the evaluation of multi-modal methods, we have revised a lot of multi-modal vocabulary in the paper;\", \"**Title Correction:** We changed the title to \\\"Benchmarking Single-Modal Molecular Representations Across Diverse Modalities\\\" to avoid misunderstanding of the paper.\", \"**Clarity of Visual Modality:** We rewrote the definition of visual modality in Section 3 and added more details of visual rendering and visual importance in **Appendix B and Appendix C.3** respectively.\", \"**Motivation and Potential Impact of BenchMol:** We discuss it in **Appendix C.1**.\", \"**Motivation, Practicality and Limitation of Benchmarks:** We discuss it in **Appendix C.2 and D.1**.\", \"**Extension of MBANet Scale:** We expand the data scale of MBANet in the experiment to reflect the robustness of the conclusion.\", \"**Computational Cost:** Additional information on model size, training cost, and inference speed has been included in **Appendix J**, providing more insight into the practicality and efficiency of different approaches.\", \"**Presentation Corrections:** We addressed disorganized presentation to improve overall readability and clarity.\", \"Third, we have carefully addressed all the reviewers' comments and provided detailed responses to each point. The revised manuscript has been uploaded. If there are any further questions or concerns, please don\\u2019t hesitate to reach out. We remain committed to improving the quality of this work and welcome further discussions.\", \"Thank you once again for your valuable feedback and support!\"]}",
"{\"comment\": \"I thank the authors for their prompt response to my concerns. I particularly appreciate the follow up studies on the potential mechanism behind the inductive bias and the impact of model size on performance. Regarding the inductive bias, I personally am not familiar with the Davis-Bouldin Index and think that a brief introduction would be useful. Regarding the number of layers, the results in Table S49, I do not think that the conclusion that perfomance increases as number of layers increases is well supported as number of layers 4 is greater than 6, though I agree that there is an upwards trend, I would recommend softening the statement.\\n\\nI also disagree with the authors in the inclusion of the skewed distributions as 1) I don't think it shows meaningful learning to just repeat 0, a blind function that always outputs 0 would outperform anything in that setting, 2) the argument that regression allows for extrapolation I think is not supported by common wisdom in the field, the deeper a network is the more it will reproduce the data distribution it has trained on, if value 10 never appeared in its training set it is highly unlikely that it will know to extrapolate to it. In practice, I don't think that regression offers any significant advantage over classification with regards to extrapolation. I find the argument in favour of RMSE compelling.\\n\\nOverall, I stand by previous assessment of the quality of the paper. It is a strong work that fills a necessary gap in the literature by offering a comprehensive cross-modal evaluation of state-of-the-art models and provides substantial baselines upon which to build new solutions. The wrappers for the multiple modalities provide an easy interface to facilitate calculation, but I do not think they are significant technical contributions. The analyses of the multiple modalities and the effects on performance are through. The points I'm discussing with the authors are, in my opinion, minor and the authors are right that they do not have a significant impact on the validity of the results, I think they formal aspects that are open to different interpretations. \\n\\nFinally, I'd like to congratulate the authors for their outstanding working.\"}",
"{\"summary\": \"The paper introduces BenchMol, a comprehensive and unified platform for molecular representation learning (MRL). BenchMol integrates seven major molecular modalities (fingerprint, sequence, graph, geometry, image, geometry image, and video) and evaluates 23 mainstream MRL methods across various tasks. The authors propose two new benchmarks, MBANet and StructNet, to systematically assess the performance and preferences of different modalities. Through extensive experiments, the paper provides findings into the strengths and weaknesses of various modalities and their suitability for different molecular types and tasks.\\n\\nThe paper is in general interesting to read, while there are a few concerns that need to addressed.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper gives a good survey for MRL methods using different modalities of molecular data.\", \"The paper proposes two benchmark datasets and tests various methods on them. Interesting conclusions are made according to the results.\", \"Given the numbers of experiments the paper have conducted, it is evident that the authors have put numerous efforts into this work, unifying code from different methods.\"], \"weaknesses\": [\"The evaluations in the paper mainly focus on prediction accuracies. However, in many scenarios, such as virtual screening, computational cost is also very important. This is especially relevant in comparing methods using different modalities, while the paper completely ignores this aspect.\", \"The paper's presentation quality falls short of expectations. While most contents are still comprehensible, many minor issues exist in the current version of the paper. For example, in Figure 2(b), the word wrapping of \\u201cpreproc/essing\\u201d in is strange; also, what is \\u201ccombing\\u201d? For another example, the paper does not provide good definitions for image, geometry image and video in the MRL context. Minor issues like these truly affect the understanding of the paper.\", \"The findings in the experiments are interesting, but many of them are potentially superficial. They are often observations of the performance numbers, but fail to develop into more insightful discussions. For example, in section 5.3 \\u201cFine-tuning of MBANet\\u201d, the paper mentions that models using the video modality significantly outperform those using other modalities. But *why* is that? The *findings* of this paper would be much more interesting if they can take one step further to develop into *insights*.\", \"The design of the benchmarks seems questionable. The dataset contains only 10,000 molecules, which is a small size considering the vast chemical space. In this case, the video modality seems to be advantageous because the video models can see more frames of the molecules. For fairer comparison, models using other modalities should be also able to access the same amount of molecular conformations.\"], \"questions\": [\"To summarize the findings of the paper, could you give a concise conclusion on which model/modality to choose in MRL-related tasks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear authors:\\n\\nThanks for the responses, some of my concerns have been resolved. However, I still find the properties presented lack practical significance, and the images or videos do not provide additional insights into the molecule itself. I will make a final decision after further discussion with the other reviewers.\"}",
"{\"summary\": \"The authors describe BenchMol, a benchmark for discriminative tasks in the molecular context with a focus on comparing methods for molecular representation learning.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors point out that benchmarking in molecular representation learning is littered with problems, such as unfair comparisons of methods arising from difference evaluation strategies (e.g. splitting differences) and the absence of a convenient unified benchmarking platform.\\n\\nThe authors perform a novel task in predicting basic attributes of molecules given their molecular representation of choice. It is surprising that this apparently simple task results in many failures, perhaps this aspect could be made a focus of the paper in the context of different molecular representations.\", \"weaknesses\": \"The authors have benchmarked methods for molecular representation, however true multi-modality comes from the underlying modality of the data, such as binary labels vs continuous labels vs 3D data vs dynamics data, and as such this benchmark is not a benchmark for multi-modal models in the way one would expect - the models themselves are single modality.\\n\\nSince this benchmark is not evaluating multi-modal molecular algorithms, there is no specific need addressed by this new benchmark that isn't already serviced by existing molecular benchmarks, e.g. QM9, MoleculeNet, OGB, etc.\", \"questions\": \"1. The behaviour of different molecular representations is an important question, despite not being related to multi-modality. Would the authors consider an angle that examined the performance on the toy task described across molecular representations. Failures of representations to perform simple operations would be highly impactful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1IwoEFyErz | Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models | [
"Wenda Li",
"Huijie Zhang",
"Qing Qu"
] | The widespread use of AI-generated content from diffusion models has raised significant concerns regarding misinformation and copyright infringement. Watermarking is a crucial technique for identifying these AI-generated images and preventing their misuse. In this paper, we introduce *Shallow Diffuse*, a new watermarking technique that embeds robust and invisible watermarks into diffusion model outputs. Unlike existing approaches that integrate watermarking throughout the entire diffusion sampling process, *Shallow Diffuse* decouples these steps by leveraging the presence of a low-dimensional subspace in the image generation process. This method ensures that a substantial portion of the watermark lies in the null space of this subspace, effectively separating it from the image generation process. Our theoretical and empirical analyses show that this decoupling strategy greatly enhances the consistency of data generation and the detectability of the watermark. Extensive experiments further validate that our *Shallow Diffuse* outperforms existing watermarking methods in terms of robustness and consistency. | [
"diffusion Model",
"watermark",
"low-dimensional subspace",
"consistency",
"robustness"
] | Reject | https://openreview.net/pdf?id=1IwoEFyErz | https://openreview.net/forum?id=1IwoEFyErz | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wEShDsDIJp",
"vZafEQQebE",
"v0j1VEnymZ",
"rnme5uwhgV",
"r9mPC84bhP",
"r9fKtcYvqc",
"r3GUbBDc5i",
"pX3A4mpIIr",
"oFuJ0fFUUC",
"o0UZfmtQSu",
"iPSlKmQFUm",
"fanSPZzMUh",
"dklhIf6Iql",
"W3DVbG9wbQ",
"TxYBjDnvOf",
"OpfZbXdsny",
"MzXeOs33ge",
"JIwq2WKsW3",
"H0omFjcOUM",
"E7x0QUpuNC",
"AAxB7epBa1",
"9y2zenaBrj"
],
"note_type": [
"official_comment",
"decision",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732398085811,
1737523529446,
1730626445280,
1732634436083,
1730275824275,
1732465248543,
1732456525636,
1732512621821,
1732282619494,
1732282598663,
1732282533091,
1732471974073,
1732528647386,
1732561832641,
1732469471599,
1732562774876,
1730319947541,
1734398269727,
1732646580103,
1730613074324,
1732282372677,
1732282638323
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2755/Area_Chair_NENW"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_WRZj"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_a7Ds"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_VSmJ"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_VSmJ"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_WRZj"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_kjYg"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_a7Ds"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_a7Ds"
],
[
"ICLR.cc/2025/Conference/Submission2755/Area_Chair_NENW"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Reviewer_kjYg"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2755/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Reviewers,\\nThe authors have responded to your valuable comments.\\nPlease take a look at them!\\n\\nBest,\\nAC\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper proposed Shallow Diffuse, a watermarking technique for diffusion models. The method is well-motivated and with proper theoretical justification. The proposed Shallow Diffuse has several key advantages compared to existing diffusion watermarks, 1) It is a training-free watermark but simultaneously maintains the consistency between watermarked and original images. 2) It is more robust than existing baselines, achieving nearly no performance drop under different robustness tests. Shallow Diffuse also considers two scenarios including the server (protect generated image) and user (protect existing image) scenarios for injecting the watermark.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Shallow Diffuse's primary strength lies in its utilization of the low-rank property of the PMP's Jacobian matrix to minimize the visual impact of watermarks, thereby attaining visual consistency within a training-free watermark framework.\\n2. The injected watermark is more robust against several image distortions than existing baselines.\", \"weaknesses\": \"1. The presentation of this paper is poor, for instance, the ablation studies (Appendix C) and index of experimental results (Table 4) are incomplete. Therefore, this leads to a shortage of critical ablation studies.\\n2. What is the performance of multi-key identification, specifically, is it possible for the Shallow Diffuse to inject multiple watermarks and distinguish between them?\\n3. The image distortions are less than that in previous studies, such as Tree-Ring, where they apply 6 distortions.\\n4. Can DiffPure purify the watermarked patterns?\\n5. The findings in Table 4 are confusing. It appears that employing channel averaging enhances robustness against image distortions. However, channel averaging involves averaging clean and watermarked images across specific channels. As per my understanding, this process might reduce watermark robustness. Can you explain this observation?\", \"questions\": \"Please see the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the new experiments and the effort you have put into the rebuttal. I will raise my score to 6. However, I still believe that the weakness in handling multi-user scenarios is a critical limitation for a watermarking method whose primary focus is on user scenarios.\"}",
"{\"summary\": \"Current watermarking techniques based on diffusion models often embed watermarks directly into the initial noise\\uff0cwhich can alter the data distribution. This paper proposes \\\"Shallow Diffuse,\\\" a method that disentangles the watermark embedding from the generation process by leveraging low-dimensional subspaces. This approach supports watermark embedding for both server-side and user-side applications while maintaining high robustness and consistency. Additionally, experiments were designed to validate robustness and conduct ablation studies across multiple datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The method utilizes the local linearity of low-dimensional subspaces. As a watermarking method based on diffusion models, it maintains the consistency of generated images.\\n\\n2. This paper provides rigorous theoretical proof and presents a substantial number of computational formulas.\", \"weaknesses\": \"1. The attack experiments are limited, consisting of only four fixed-parameter attacks, which do not demonstrate the method's robustness. For instance, the method can be viewed as a variant of treering, could experiments with additional attacks, such as rotation\\u3001regeneration be included?\\n\\n2. The theoretical assumptions of the method are built upon [1], but the experimental results yield a different range of t values compared to the theoretical analysis in [1]. Although this can be explained by the errors introduced by DDIM-Inv, it remains perplexing.\\n\\n3. The method relies on the properties of DDIM and DDIM-inverse, which may lack certain generalizability. It might not perform well for attacks executed in the latent space.\", \"questions\": \"1. Shouldn't the formula $\\\\hat{\\\\boldsymbol{x}}_{0, t}:=\\\\boldsymbol{f}_{\\\\boldsymbol{\\\\theta}, t}\\\\left(\\\\boldsymbol{x}_{t}+\\\\lambda \\\\Delta \\\\boldsymbol{x}\\\\right)$ on line 360 be $\\\\hat{\\\\boldsymbol{x}}_{0, t}:=\\\\boldsymbol{f}_{\\\\boldsymbol{\\\\theta}, t}\\\\left(\\\\boldsymbol{x}_{t}\\\\right)$\\n\\n2. Could you specify the parameters used in the Shallow Diffuse method in Section 5, such as the embedding channels and watermark radius?\\n\\n3. The experiments in Appendix C only provide results, could you include some analysis?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for taking the time to review our rebuttal thoroughly. We sincerely appreciate your time and effort in improving our project.\"}",
"{\"comment\": \"I'd like to thank the authors for their detailed response. It has effectively addressed my concerns, particularly regarding the adversarial attacks and the ablation studies on different sampling methods. As a result, I have adjusted my score accordingly.\"}",
"{\"comment\": \"Thank you for carefully reviewing our rebuttal. We truly appreciate your time and effort in helping us make our project better.\"}",
"{\"comment\": \">**Q1:** \\\"Besides Tree-Ring and Stable Signature, I think there are more existing watermarking methods that introduce small perturbation to the image to embed watermark, like StegaStamp. To show the proposed method actually preserves image quality, I think the authors should compare the method with some watermarking methods that watermark the image after image generation with a small perturbation.\\\"\\n\\n**A1:** We have included comparisons with StegaStamp in both user and server scenarios, as detailed in Tables 1 - 4. In summary, the generation consistency of Shallow Diffuse is comparable to that of StegaStamp, as shown in Table 2. However, Shallow Diffuse demonstrates significantly better robustness, particularly against diffusion-based attacks such as DiffPure and IR, as highlighted in Tables 1 - 4.\\n\\n---\\n\\n>**Q2:** \\\"In the robustness part, I think the regeneration attacks and some adversarial perturbation should be evaluated on the proposed method to see whether the proposed method is actually robust under various attacks.\\\"\\n\\n**A2:** We have added 7 more adversarial attacks, please see Q1 in the global response.\\n\\n---\\n\\n>**Q3:** \\\"Since the authors mention the user scenario, if multiple users rewatermark the same image with the proposed method, can the watermark embeded by the specific user be detected in this circumstance?\\\"\\n\\n**A3:** We have added experiments for multi-key identification. please see Q2 in the response to all reviewers and ACs.\\n\\n>**Q4:** \\\"Compared to Tree-Ring, the technical contribution of the proposed method is limited.\\\"\\n\\n**A4:** Although our method is developed based on the Tree-Ring framework, there are several significant contributions over Tree-Ring that we want to highlight below:\\n\\n1. **Identifying and addressing fundamental limits of Tree-Ring.** Our study revealed that the Jacobian of the posterior mean estimator is full-rank at high-noise timesteps, leading to inherent image distortion when injecting watermarks using Tree-Ring. Our work addressed this limitation. We inject the watermark at shallow timesteps, where the Jacobian is low-rank with a large null space. This ensures most of the watermark's energy resides in the null space, minimizing distortion in image generation.\\n \\n2. **Improved Watermarking Techniques.** In Section 3.2, we inject the watermark into the low-frequency region, whereas Tree-Ring uses the high-frequency region. Additionally, in Appendix B, we introduce the concept of channel averaging. With these technical improvements, Shallow Diffuse achieves enhanced robustness and consistency.\\n \\n\\n3. **Theoretical justifications.** Tree-Ring is purely an empirical method lacking theoretical justification. In contrast, our approach is backed by rigorous theoretical guarantees of detectability and robustness under appropriate assumptions. This foundation enhances the interpretability and trustworthiness of our method.\\n\\n[1] Chen, Siyi, Huijie Zhang, Minzhe Guo, Yifu Lu, Peng Wang, and Qing Qu. \\\"Exploring low-dimensional subspaces in diffusion models for controllable image editing.\\\" arXiv preprint arXiv:2409.02374 (2024).\"}",
"{\"comment\": \"> **Q1:** \\\"The table in the paper is not very well drawn, it is very difficult to read, especially the header. At the same time, the experimental part is not detailed enough. For example, should the comparison method reproduce the results or use the pre-training model?\\\"\\n\\n**A1:** Thank you for your feedback. We have improved the presentation, as outlined in Q3 of our global response. Regarding the comparison setup, we highlight this in lines 401\\u2013405. In summary, for the server scenario, all diffusion-based methods are evaluated using the same model, Stable Diffusion 2.1. Non-diffusion methods are applied to images generated by Stable Diffusion 2.1. We control the initial seeds so that the non-diffusion methods use the same set of images as the diffusion-based methods.\\n\\n---\\n>**Q2:** \\\"In Table 1, for the CLIP-Score index, yours is 0.3285, which seems to be the worst. Please explain further.\\\"\\n\\n**A2:** The CLIP score focuses solely on image quality. For watermarking, however, our priority is the consistency between the watermarked and original images, which the CLIP score fails to capture.\\n\\nTable 1 shows that the CLIP score and FID achieved by Shallow Diffuse are closest to those of Stable Diffusion without watermarking (see the first and last rows of Table 1). This suggests that images generated by Shallow Diffuse maintain greater consistency with those of the original Stable Diffusion model. In contrast, methods like Tree-Ring Watermarks, while achieving higher CLIP scores, significantly distort the images, which is undesirable. Figure 1 further illustrates this, showing how Tree-Ring Watermarks introduce a bias toward the inserted key.\\n\\n---\\n>**Q3:** \\\"Please explain why the filter size in the Gaussian blurring is 8 \\u00d7 8 and how the standard deviation is selected.\\\"\\n\\n**A3:** We apply the same experiment setting as our baseline method Tree-Ring and RingID.\\n\\n---\\n>**Q4:** \\\"As can be seen from Table 2, the PSNR and SSIM of most methods are very low, so it is easy for human eyes to find modification traces, which easily leads to the risk of watermarked images being maliciously broken. Please further explain the visual quality of the generated watermarked image.\\\"\\n\\n**A4:** We have included additional qualitative comparisons with non-diffusion-based methods in Figure 8. From the results, it is challenging to visually distinguish our method from these non-diffusion-based approaches and clean images. Furthermore, as demonstrated in Figure 1, our method achieves significantly greater visual consistency with the original image compared to Tree-Ring and RingID.\"}",
"{\"comment\": \"> **Q1:** The presentation of this paper is poor, for instance, the ablation studies (Appendix C) and index of experimental results (Table 4) are incomplete. Therefore, this leads to a shortage of critical ablation studies.\\n\\n**A1:** We have improved the presentation in the revised paper. Specifically, we have included discussions on each ablation study in Appendix C, and improved the readability of the tables as suggested. See Q3 of our global response.\\n\\n---\\n\\n> **Q2:** What is the performance of multi-key identification, specifically, is it possible for the Shallow Diffuse to inject multiple watermarks and distinguish between them?\\n\\n**A2:** We have added experiments for multi-key watermarking. Please see Q2 of our global response.\\n\\n---\\n\\n> **Q3:** \\n> \\\"The image distortions are less than that in previous studies, such as Tree-Ring, where they apply 6 distortions\\\". \\n> \\\"Can DiffPure purify the watermarked patterns?\\\"\\n\\n**A3:** We have added 7 more adversarial attacks, please see Q1 in our global response to all reviewers. Specifically, we have chosen DiffPure [1] as a specific attack. Shallow Diffuse achieves 1.00 TPR@1%FPR at the server scenario and 0.86 (COCO), 0.9 (DiffusionDB), 1.0 (WikiArt) TPR@1%FPR at the user scenario. Thus, DiffPure is hard to purify the watermarked patterns from Shallow Diffuse.\\n\\n---\\n\\n> **Q4:** \\\"The findings in Table 4 are confusing. It appears that employing channel averaging enhances robustness against image distortions. However, channel averaging involves averaging clean and watermarked images across specific channels. As far as I understand, this process might reduce the robustness of the watermark. Can you explain this observation?\\\"\\n\\n**A4:** We do not directly average the clean images and watermarked images in our approach. Instead, we embed the watermark into a single channel while averaging the non-watermarked channels. This design leverages the observation that many image processing operations, such as color jittering or Gaussian blurring, tend to affect all channels uniformly. By isolating the watermark in a single channel, it becomes less susceptible to these transformations. Consequently, channel averaging improves robustness against certain attacks.\\n\\n[1] Nie, Weili, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. \\\"Diffusion models for adversarial purification.\\\" arXiv preprint arXiv:2205.07460 (2022).\"}",
"{\"comment\": \"Thanks for your response, and now this paper seems more complete than the previous version.\"}",
"{\"comment\": \"The author's reply can basically answer my concerns, based on this, I have adjusted the corresponding score.\"}",
"{\"comment\": \"Thank you for thoroughly reviewing our rebuttal. We sincerely appreciate your time and effort in helping us improve our project.\"}",
"{\"comment\": \"Thank you for providing the additional experiments. For Questions 2 and 4, I believe my concerns have been addressed. However, for Questions 1 and 3, I find that my concerns remain unresolved.\", \"q1\": \"Based on the new Table 1, it is evident that the generation quality of StageStamp significantly outperforms the proposed method, as indicated by the CLIP score (0.355 vs. 0.328). This raises a critical question: does the improved robustness of the proposed method come solely as a trade-off for lower generation quality?\", \"q3\": \"In the new experiment, the authors stated that they utilized non-overlapping masks for multi-user rewatermarking and assumed the ability to predefine the number of keys and non-overlapping masks. However, this evaluation is limited to the scenario of two users. In real-world applications, as the number of users increases, how significantly would the identification accuracy and robustness of the proposed method degrade? Furthermore, if the number of users surpasses the predefined limit, does this imply that no additional users can be integrated into the watermarking system?\"}",
"{\"comment\": \">**Q1:** Based on the new Table 1, it is evident that the generation quality of StageStamp significantly outperforms the proposed method, as indicated by the CLIP score (0.355 vs. 0.328). This raises a critical question: does the improved robustness of the proposed method come solely as a trade-off for lower generation quality?\\n\\n**A1:** Thank you for the insightful comments. We address two aspects of your question as follows.\\n\\n1. CLIP score is not a good indicator of generation consistency. For watermarking applications, our primary objective is to ensure consistency between watermarked and original images rather than generation quality\\u2014a property that the CLIP score does not effectively capture. Figure 9 illustrates this by comparing original Stable Diffusion images with watermarked versions generated by StageStamp and Shallow Diffuse. While StageStamp introduces noticeable visual artifacts, Shallow Diffuse produces cleaner, more visually consistent outputs. This discrepancy between visual quality and the CLIP score highlights the limitations of the CLIP metric, which is inherently biased due to its sensitivity to text embeddings [1].\\n \\n2. Our method improves both robustness and generation consistency over StageStamp. Furthermore, Table 1 demonstrates that Shallow Diffuse achieves CLIP scores and FID values closer to those of the original Stable Diffusion (first and last rows of Table 1). This alignment implies that Shallow Diffuse better preserves the original image characteristics compared to StageStamp, which, despite achieving higher CLIP scores, introduces undesirable distortions for watermarking applications. Additionally, as demonstrated in Table 2, metrics better suited for assessing generation consistency\\u2014such as LPIPS, SSIM, and PSNR\\u2014indicate that Shallow Diffuse performs comparably to StageStamp.\\n \\n\\nTo fully evaluate whether there exists a trade-off between these factors, we also conducted additional experiments comparing our approach with existing methods. As shown in Figure 4, under nearly identical robustness conditions, Shallow Diffuse outperforms others in terms of generation consistency. This demonstrates that our method achieves simultaneous improvements in both robustness and consistency, without compromising one for the other.\\n\\n[1] Ahmadi, Saba, and Aishwarya Agrawal. \\\"An examination of the robustness of reference-free image captioning evaluation metrics.\\\" Findings of the Association for Computational Linguistics: EACL 2024, pages 196\\u2013208.\\n\\n>**Q3:** In the new experiment, the authors stated that they utilized non-overlapping masks for multi-user rewatermarking and assumed the ability to predefine the number of keys and non-overlapping masks. However, this evaluation is limited to the scenario of two users. In real-world applications, as the number of users increases, how significantly would the identification accuracy and robustness of the proposed method degrade? Furthermore, if the number of users surpasses the predefined limit, does this imply that no additional users can be integrated into the watermarking system?\\n\\n**A3:** Thank you for the thoughtful question. We extended our experiments to include 4, 8, 16, and 32 users and compared the results with Tree-Ring. The results are presented in Table 6, and we\\u2019ve summarized the table below.\\n\\nShallow Diffuse consistently outperformed Tree-Ring in robustness across different numbers of users. 
Even as the number of users increased to 32, Shallow Diffuse maintained strong robustness under clean conditions. However, in adversarial settings, its robustness began to decline when the number of users exceeded 16. Under the current setup, when the number of users surpasses the predefined limit, our method becomes less robust and accurate.\\n\\nWe believe that enabling watermarking for hundreds or even thousands of users simultaneously is a challenging yet promising future direction for Shallow Diffuse.\\n\\n```markdown\\n| Watermark number | Method | Clean | Adversarial average |\\n|------------------|-----------------|-----------|:-------------------:|\\n| 2 | Tree-Ring | 1.00/1.00 | 0.98/0.80 |\\n| 2 | Shallow Diffuse | 1.00/1.00 | 0.99/0.95 |\\n| 4 | Tree-Ring | 1.00/1.00 | 0.96/0.70 |\\n| 4 | Shallow Diffuse | 1.00/1.00 | 0.99/0.86 |\\n| 8 | Tree-Ring | 1.00/0.95 | 0.91/0.47 |\\n| 8 | Shallow Diffuse | 1.00/1.00 | 0.98/0.80 |\\n| 16 | Tree-Ring | 0.96/0.57 | 0.83/0.26 |\\n| 16 | Shallow Diffuse | 1.00/0.89 | 0.92/0.56 |\\n| 32 | Tree-Ring | 0.95/0.44 | 0.80/0.16 |\\n| 32 | Shallow Diffuse | 0.99/0.89 | 0.90/0.44 |\\n```\"}",
"{\"summary\": \"The paper proposed a new image watermarking method that embeds a watermark into the null space of a specific step during image denoising in diffusion model. It shows that the proposed watermarking method have smaller impact on the generated images compared to the existing methods like Stable Signature and Tree-Ring. Additionally, the proposed method shows good robustness to image processing methods like JPEG and Gaussian blur.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method has smaller impact on the generated images compared to the existing watermarking methods designed for diffusion models.\\n2. Experiments are carried on several image-prompt datasets to show the effectiveness of the proposed methods.\\n3. The robustness of the propsoed method is evaluated.\", \"weaknesses\": \"1. Compared to Tree-Ring, the technical contribution of the proposed method is limited.\\n2. In the experimental part, the authors mainly compare their method with the watermarking methods that embed watermark into the semantic space like Tree-Ring which changes the image a lot. More other watermarking methods should be evaluated.\\n3. In the robustness part, the authors only evaluate the robustness of the proposed method on some common perturbation.\", \"questions\": \"1. Besides Tree-Ring and Stable Signature, I think there are more existing watermarking methods that introduce small perturbation to the image to embed watermark, like StegaStamp. To show the proposed method actually preserves image quality, I think the authors should compare the method with some watermarking methods that watermark the image after image generation with a small perturbation.\\n2. In the robustness part, I think the regeneration attacks and some adversarial perturbation should be evaluated on the proposed method to see whether the proposed method is actually robust under various attacks.\\n3. Since the authors mention the user scenario, if multiple users rewatermark the same image with the proposed method, can the watermark embeded by the specific user be detected in this circumstance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper studies diffusion-based digital watermarking to deal with the AI-generated content tracing problem. The proposed approach, ``Shallow Diffuse,'' tries to decouple both the watermarking and diffusion processes by leveraging the presence of a low-dimensional subspace in the image generation process. Both theoretical and empirical analyses were presented.\\n\\nThe authors claimed ``Consistency and Robustness: Extensive experiments demonstrate that Shallow Diffuse consistently outperforms other diffusion-based methods in terms of both robustness and reproducibility.'' in their response. Most reviewers are satisfied by the responses.\\nHowever, by checking Table 4, Shallow Diffuse presents robustness inferior to RingID under a vert limited set of selected attacks.\\nFor multiple key identification presented in Table 5, ShallowDiffuse performs inferior to RingID.\\nIn term of these watermarking requirements, it is hard to say that ShallowDiffuse advances the development of diffusion-based watermarking.\\nTo surpass and to identify sufficient differences from the prior works, it is encouraged to conduct robustness evaluations under a broad range of attacks/distortions.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers are satisfied with the authors' response, except that Reviewer a7Ds ``still believe that the weakness in handling multi-user scenarios is a critical limitation for a watermarking method whose primary focus is on user scenarios.'' As an AC that also has much experiences in digital watermarking, this work needs to conduct a comprehensive robustness evaluation from a broad range of attacks, in particular including geometric distortions.\"}",
"{\"comment\": \"We sincerely thank you for your thoughtful feedback and for raising our rating. Multi-user scenarios are indeed a crucial area of research, and we are willing to explore this approach in future works to improve our framework further.\"}",
"{\"summary\": \"This paper proposes a watermarking technique Shallow Diffuse. Unlike existing approaches that integrate watermarking throughout the entire diffusion sampling process, Shallow Diffuse decouples these steps by leveraging the presence of a low-dimensional subspace in the image generation process. This method ensures that a substantial portion of the watermark lies in the null space of this subspace, effectively separating it from the image generation process.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The originality of this paper is not great, but its quality, clarity and significance are good. It has the support of rich theoretical basis and has advantages in theoretical proof.\", \"weaknesses\": \"The table in the paper is not very well drawn, it is very difficult to read, especially the header. At the same time, the experimental part is not detailed enough. For example, should the comparison method reproduce the results or use the pre-training model?\", \"questions\": \"1. The table in the paper is very difficult to read clearly.\\n\\n2. In Table 1, for the CLIP-Score index, yours is 0.3285, which seems to be the worst. Please explain further.\\n\\n3. Please explain why the filter size in the Gaussian blurring is 8 \\u00d7 8 and how the standard deviation is selected.\\n\\n4. As can be seen from Table 2, the PSNR and SSIM of most methods are very low, so it is easy for human eyes to find modification traces, which easily leads to the risk of watermarked images being maliciously broken. Please further explain the visual quality of the generated watermarked image.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To all reviewers and ACs\", \"comment\": \"We sincerely thank all reviewers for their thorough evaluation of our work and for providing valuable and constructive feedback. We are encouraged by the positive remarks, including that our method is \\u201cwell-motivated,\\u201d \\u201csignificant,\\u201d and \\u201ceffective\\u201d (WRZj, kjYg, a7Ds), as well as the recognition of our theoretical analysis as \\u201cproper,\\u201d \\u201crich,\\u201d and \\u201crigorous\\u201d (WRZj, kjYg, VSmJ).\\n\\n**Summary of Our Contributions**:\\n\\nIn this work, we introduce Shallow Diffuse, a simple yet effective watermarking method that leverages the low-dimensional space inherent in the diffusion model generation process. By decoupling the sampling and watermarking steps, our approach achieves several notable advantages:\\n\\n1. **Flexibility**: To the best of our knowledge, Shallow Diffuse is the first training-free, diffusion-based watermarking method that can be efficiently applied in both user-side and server-side scenarios.\\n \\n2. **Consistency and Robustness**: Extensive experiments demonstrate that Shallow Diffuse consistently outperforms other diffusion-based methods in terms of both robustness and reproducibility.\\n \\n3. **Theoretical Foundations**: Unlike prior methods, our work provides theoretical bounds for both consistency and detectability, offering a solid foundation for the effectiveness of our approach.\\n \\n\\n **Addressing reviewers\\u2019 major concerns**. We thank the reviewers for their feedback on our presentation and suggested experiments. In response, we have addressed key concerns, including adversarial attack evaluations, multi-key watermarking experiments, and presentation improvements, as detailed below. Reviewer-specific questions have also been addressed individually, with all changes highlighted in red in the revised paper.\\n___\\n> **Q1**: Additional adversarial attack evaluations\\n\\n**A1**:\\nWe have incorporated 7 more attack methods, including resize and restore, random drop, medium blurring, diffusion purification [1], VAE-based image compression models [2, 3], and stable diffusion-based image regeneration [4]. Detailed settings for these attacks are provided in Appendix C.1, while the experimental results are summarized in Tables 1\\u20134. After taking these attacks into account, Shallow Diffuse is still one of the most robust methods in both the user and the server scenario.\\n\\n___\\n> **Q2**: Experiments on multi-key watermarking.\\n\\n**A2**:\", \"we_have_designed_two_tasks_for_evaluating_multi_key_watermarking\": \"multi-key identification and multi-key re-watermarking.\\n- **Multi-key identification:** This classification task tests the ability to identify individual watermarks among $N=2048$ keys, each with a distinct ring-shaped key $W_i$ for $i = 1, ..., N$. A random key is embedded into images, and after attacks, the task is to detect if the correct key is identified. The success rate serves as the evaluation metric. Results in Table 5 show that Shallow Diffuse outperforms Tree-Ring despite lacking a multi-key-specific design, while RingID achieves the highest success rate because it is specifically designed for multi-key identification. Future exploration of multi-key identification strategies is promising.\\n \\n- **Multi-key re-watermarking:** This task evaluates embedding and detecting multiple watermarks (tested with two) in the same image. Metrics include the average AUC and TPR@1%FPR over all watermarks. 
Results in Table 6 demonstrate Shallow Diffuse\\u2019s ability to handle multi-key re-watermarking problems, achieving 1.00 for most of the metrics.\\n\\nDetails of the experiment can be found in section C.2.\\n\\n___\\n> **Q3**: Improved presentation in tables and appendix.\\n\\n**A3**:\\n\\nWe have re-designed Tables 1, and 2 (layouts and captions) and split the detailed adversarial attack experiments in Tables 3, and 4. We have also added discussions about each ablation study in Appendix C. \\n\\n[1] Nie, Weili, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. \\\"Diffusion models for adversarial purification.\\\" International Conference on Machine Learning (ICML 2022).\\n\\n[2] Ball\\u00e9, Johannes, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. \\\"Variational image compression with a scale hyperprior.\\\" International Conference on Learning Representations (ICLR 2018).\\n\\n[3] Cheng, Zhengxue, Heming Sun, Masaru Takeuchi, and Jiro Katto. \\\"Learned image compression with discretized gaussian mixture likelihoods and attention modules.\\\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 7939-7948. 2020.\\n\\n[4] Zhao, Xuandong, Kexun Zhang, Yu-Xiang Wang, and Lei Li. \\\"Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats.\\\" (2023).\"}",
"{\"comment\": \"> **Q1:** \\\"The attack experiments are limited, consisting of only four fixed-parameter attacks, which do not demonstrate the method's robustness. For instance, the method can be viewed as a variant of treering, could experiments with additional attacks, such as rotation\\u3001regeneration be included?\\\"\\n\\n**A1:** We have added 7 additional adversarial attacks, please see Q1 in the global response.\\n\\n> **Q2:** \\\"The theoretical assumptions of the method are built upon [1], but the experimental results yield a different range of t values compared to the theoretical analysis in [1]. Although this can be explained by the errors introduced by DDIM-Inv, it remains perplexing.\\\"\\n\\n**A2:** There is another important factor contributing to this gap. The previous results in LOCO-Edit [2] only evaluate rank in the image space of unconditional diffusion models. However, most of our experiments are conducted on latent diffusion models such as Stable Diffusion. For these models, the minimum rank may be achieved at a different timestep compared to vanilla diffusion models. However, we couldn\\u2019t reproduce the experiment for Stable Diffusion because calculating the Jacobian matrix is computationally infeasible due to its huge size (3 * 256 * 256 x 3 * 256 * 256).\\n\\nIn the revision, we have included a discussion this factor in the experimental analysis.\\n\\n>**Q3:** \\\"The method relies on the properties of DDIM and DDIM-inverse, which may lack certain generalizability. It might not perform well for attacks executed in the latent space.\\\"\\n\\nWe believe there might be some misunderstandings of our result. In our experiments, we do apply our Shallow Diffuse in the latent space for Stable Diffusion, see line 402. Additionally, we have added ablation studies on different sampling methods, including DDIM, DEIS [3], DPM-Solver [4], PNDM [5], and UniPC [6]. See Appendix C.6 for more details. In short, all these samplers have very similar image generation quality and robustness, demonstrating the generalizability of our approach.\\n\\n>**Q4:** Typos in the equation.\\n\\n**A4:** We have fixed and highlighted it in the revised manuscript.\\n\\n>**Q5:** Could you specify the parameters used in the Shallow Diffuse method in Section 5, such as the embedding channels and watermark radius?\\n\\n**A5:** We have added and highlighted it in the revised manuscript in section C.5 and line 269.\\n\\n>**Q6:** The experiments in Appendix C only provide results, could you include some analysis?\\n\\n**A6:** We have added discussions and highlighted them in Appendix C in the revised manuscript.\\n\\n[2] Chen, Siyi, Huijie Zhang, Minzhe Guo, Yifu Lu, Peng Wang, and Qing Qu. \\\"Exploring low-dimensional subspaces in diffusion models for controllable image editing.\\\" NeurIPS 2024\\n\\n[3] Zhang, Qinsheng, and Yongxin Chen. \\\"Fast sampling of diffusion models with exponential integrator.\\\" ICLR 2023.\\n\\n[4] Lu, Cheng, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. \\\"Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps.\\\" _Advances in Neural Information Processing Systems_ 35 (2022): 5775-5787.\\n\\n[5] Liu, Luping, Yi Ren, Zhijie Lin, and Zhou Zhao. \\\"Pseudo numerical methods for diffusion models on manifolds.\\\" ICLR 2022.\\n\\n[6] Zhao, Wenliang, Lujia Bai, Yongming Rao, Jie Zhou, and Jiwen Lu. 
\\\"Unipc: A unified predictor-corrector framework for fast sampling of diffusion models.\\\" _Advances in Neural Information Processing Systems_ 36 (2024).\"}"
]
} |
1IuwdOI4Zb | Animate-X: Universal Character Image Animation with Enhanced Motion Representation | [
"Shuai Tan",
"Biao Gong",
"Xiang Wang",
"Shiwei Zhang",
"DanDan Zheng",
"Ruobing Zheng",
"Kecheng Zheng",
"Jingdong Chen",
"Ming Yang"
] | Character image animation, which generates high-quality videos from a reference image and target pose sequence, has seen significant progress in recent years. However, most existing methods only apply to human figures, which usually do not generalize well on anthropomorphic characters commonly used in industries like gaming and entertainment. Our in-depth analysis suggests to attribute this limitation to their insufficient modeling of motion, which is unable to comprehend the movement pattern of the driving video, thus imposing a pose sequence rigidly onto the target character. To this end, this paper proposes $\texttt{Animate-X}$, a universal animation framework based on LDM for various character types (collectively named $\texttt{X}$), including anthropomorphic characters. To enhance motion representation, we introduce the Pose Indicator, which captures comprehensive motion pattern from the driving video through both implicit and explicit manner. The former leverages CLIP visual features of a driving video to extract its gist of motion, like the overall movement pattern and temporal relations among motions, while the latter strengthens the generalization of LDM by simulating possible inputs in advance that may arise during inference. Moreover, we introduce a new Animated Anthropomorphic Benchmark ($\texttt{$A^2$Bench}$) to evaluate the performance of $\texttt{Animate-X}$ on universal and widely applicable animation images. Extensive experiments demonstrate the superiority and effectiveness of $\texttt{Animate-X}$ compared to state-of-the-art methods. | [
"Animation",
"Anthropomorphic",
"Video Generation",
"Pose"
] | Accept (Poster) | https://openreview.net/pdf?id=1IuwdOI4Zb | https://openreview.net/forum?id=1IuwdOI4Zb | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zJHhjdh8KL",
"z7OyUzDsoG",
"vtvARvs95V",
"vXa5aqYCrj",
"ulnlCGXswD",
"t3KqOIVP6J",
"t0L34YQ3uE",
"pQl1ffz15F",
"kgpWzMleHN",
"k6feaopYus",
"jEGJSXbhdr",
"iyjAwYw0CZ",
"gnItpVM2RD",
"g9dqb0cNRb",
"fjFVppLIXP",
"fh6IJJWGGh",
"eKL6l2G2Ks",
"drcNMai9Lx",
"ahWMp9ArlC",
"ZOOVuHMxPv",
"XqjDvq0Yqd",
"WhABqUY73T",
"TB51g0cz3j",
"SClEUBMPLQ",
"QvbV9Ot2mw",
"QW1WCHvd28",
"PHaTjppS8O",
"Lnjz2Tgcw9",
"LK0Q0195Rw",
"KYeKs0kLoQ",
"EdSN94XBIO",
"DRSKbFCgXE",
"CwqryetGy4",
"BwjW0E6vRn",
"BiOvY92TTf",
"BR1d6EKbVe",
"8NZ82AppSw",
"6WFzxNQB04",
"6Fs2HgstSL",
"2yIgh9ytX2",
"2Ro8hUiKVr",
"1dD0eH5Xmr",
"1GnAEQ5TUM"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732090544327,
1732090879987,
1732510368657,
1732096751568,
1732510285675,
1732890164678,
1732853252544,
1730634038214,
1732093415125,
1733116638978,
1733133180221,
1732092638636,
1733132314588,
1732538548084,
1732793337549,
1732089602358,
1730628139094,
1730346800522,
1732096031478,
1737523374542,
1732095088291,
1732095376206,
1732802712669,
1730641747769,
1732809141874,
1732847050644,
1734884885906,
1733131592574,
1732097252665,
1732087609752,
1732856205691,
1732510345931,
1732094523696,
1732094325695,
1732708580989,
1732812500738,
1732712798151,
1732804964560,
1732802182426,
1732853808972,
1732510311471,
1732534591581,
1732096842973
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_feUz"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_mbHE"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_mbHE"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_mbHE"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_feUz"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_feUz"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_ESK8"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_feUz"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_aHUH"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Area_Chair_1r4d"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_ESK8"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_feUz"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_mbHE"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
],
[
"ICLR.cc/2025/Conference/Submission37/Reviewer_aHUH"
],
[
"ICLR.cc/2025/Conference/Submission37/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response (2)\", \"comment\": \"**Comment 2: Additionally, the benchmark lacks detailed information, such as video length and frame rate (Answer 2.1). Were any additional motion prompts used to generate videos from images (Answer 2.2)? If so, what is their diversity and complexity (Answer 2.3)?**\\n\\n**Answer 2.1.** Each video in A\\u00b2Bench is 5 seconds long, with a frame rate of 30 FPS and a resolution of 832 \\u00d7 1216.\\n\\n**Answer 2.2.** When generating videos from images, we supplement the prompt in Figure 10 (original submission) regarding spatial relationships, physical logic, and temporal consistency. Examples include: *\\\"reasonable movement\\\"*, *\\\"varied dance\\\"*, and *\\\"continuous dance\\\"*. These prompts further ensure strict logic and good spatial and temporal consistency.\\n\\n**Answer 2.3.** To guarantee diversity and complexity, for each prompt, we first generate 4 images using 4 different random seeds. Then, for each image, we generate 4 videos. This process ensures both diversity and complexity in the final results. Moreover, as suggested by **Reviewer #3 feUz**, we add style trigger words such as *\\\"Watercolor Painting\\\"*, *\\\"Cyberpunk Style\\\"*, *\\\"Van Gogh\\\"*, *\\\"Ukiyo-E\\\"*, *\\\"Pixel Art\\\"*, and so on. The results are presented in Figure 3 (response letter), which further enhances the diversity and complexity of A$^2$Bench.\\n\\n***\\n\\n**Comment 3: The necessity of a pose pool and the selection of an anchor pose image need clarification (Answer 3.3). What operations are involved in the ''align'' process (Answer 3.1), specifically regarding translation and rescaling (Answer 3.2)? Why not use random translation and rescaling instead of relying on an anchor pose image (Answer 3.3)?**\\n\\n**Answer 3.1.** As shown in the left half of Figure 8 (original submission) or Figure 4 (response letter), the operations in the ''align'' process are as follows:\\n- **Step1:** Given a driving pose $I^p$, we randomly select an anchor pose $I^p_{anchor}$ from the pose pool (two examples are shown in Figure 8.)\\n- **Step2:** We then calculate the proportion of each body part between these two poses. For example, the shoulder length of $I^p_{anchor}$ divided by the shoulder length of $I^p$ might be 0.45, and the leg length of $I^p_{anchor}$ divided by the leg length of $I^p$ might be 0.53, and so on.\\n- **Step3:** We multiply each body part of the driven pose (*i.e.*, $I^p$) by the corresponding ratio (*e.g.*, 0.45, 0.53, *etc.*) to obtain the aligned pose (*i.e.*, $I^p_n$).\\n\\n**Answer 3.2.** As shown in the right half of Figure 4 (response letter):\\n- **Step4:** (*\\\"rescaling\\\"*) Then we define a set of keypoint rescaling operations, including modifying the length of the body, legs, arms, neck, and shoulders, altering face size, adding or removing specific body parts, *etc.* These operations are stored in a rescale pool.\\n- **Step5:** (*\\\"translation\\\"*) We apply the selected rescaling operations on the aligned pose $I^p_{realign}$ to obtain the final transformed poses $I^p_n$.\\n\\n**Answer 3.3.** As shown in Figure 5 (response letter), the reason for *\\\"not using random translation and rescaling instead of relying on an anchor pose image\\\"* is that random translation and rescaling disrupt the motion guidance originally conveyed by the driven pose image. This issue makes the animation model miss the accurate driving guidance, which diminishes its ability to generate proper animations. 
In contrast, using anchor pose images maintain harmonious proportions for each body part and preserve the consistency of all motion details.\\n\\nTo prove this point, we **re-trained** our model using pose images obtained through **random** translation and rescaling. The results, presented in Figure 6 (response letter), indicate that the baseline achieves only a marginal improvement (*i.e.*, the content of the reference image only appears in the initial frames, while illogical human characteristics persist throughout). In contrast, our approach delivers satisfactory performance (*i.e.*, it perfectly preserves the cartoon ID of the reference image while adding dynamic motion).\\n\\nFinally, as shown in **Table II**, quantitative results of ablation study indicate that the \\\"realign\\\" operation plays a crucial role in improving performance, which justifies both the pose pool and the selection of an anchor pose for EPI alignment.\"}",
"{\"title\": \"Response (3)\", \"comment\": \"| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|--------------------------|---------------|--------------|----------------|---------------|---------------|---------------|---------------|\\n| w/o Add in EPI | 13.28 | 0.442 | 1.56E-04 | 0.459 | 34.24 | 52.94 | 804.37 |\\n| w/o Drop in EPI | 13.36 | 0.441 | 1.94E-04 | 0.458 | *26.65* | 44.55 | 764.52 |\\n| w/o BS in EPI | 13.27 | 0.443 | 1.08E-04 | 0.461 | 29.60 | 56.56 | 850.17 |\\n| w/o NF in EPI | *13.41* | *0.446* | 1.82E-04 | 0.455 | 29.21 | 56.48 | 878.11 |\\n| w/o AL in EPI | 13.04 | 0.429 | *1.04E-04* | 0.474 | 27.17 | *33.97* | 765.69 |\\n| w/o Rescalings in EPI | 13.23 | 0.438 | 1.21E-04 | 0.464 | 27.64 | 35.95 | *721.11* |\\n| w/o Realign in EPI | 12.27 | 0.433 | 1.17E-04 | *0.434* | 34.60 | 49.33 | 860.25 |\\n| **with complete EPI** | **13.60** | **0.452** | **1.02E-04** | **0.430** | **26.11** | **32.23** | **703.87** |\\n\\n**Table II:** Quantitative results of the ablation study. The best and second-best results for each column are **bold** and *italicized*, respectively.\\n\\n***\\n\\n**Comment 4: The effectiveness of the Implicit Pose Indicator (IPI) is also in question. The motivation for the IPI is that sparse keypoints lack image-level details, while IPI aims to retrieve richer information. However, Table 7 and 8 indicate that Animate-X achieves comparable performance to Animate-Anyone and UniAnimate on human videos. This suggests that the IPI does not provide any benefits for human animation.**\\n\\nThe effectiveness of the Implicit Pose Indicator (IPI) have been demonstrated through the quantitative results in **Table III** (*i.e.*, Table 4 in the original submission) and the qualitative analysis in Figure 7 in the original submission.\\n\\n| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|--------------------|---------------|----------------|------------------|-----------------|-----------------|----------------|-----------------|\\n| w/o IPI | 13.30 | 0.433 | 1.35E-04 | *0.454* | 32.56 | 64.31 | 893.31 |\\n| w/o LQ | *13.48* | 0.445 | 1.76E-04 | *0.454* | 28.24 | 42.74 | 754.37 |\\n| w/o DQ | 13.39 | 0.445 | **1.01E-04** | 0.456 | 30.33 | 62.34 | 913.33 |\\n| **Animate-X** | **13.60** | **0.452** | *1.02E-04* | **0.430** | **26.11** | **32.23** | **703.87** |\\n\\n**Table III:** Quantitative results of the ablation study on IPI. The best and second-best results for each column are **bold** and *italicized*, respectively.\\n\\n**1)** The primary purpose of Animate-X is to animate universal characters, especially anthropomorphic figures in cartoons and games. Human animation is **NOT** the primary focus of this work as it is a small subset of 'X'. Table 7 & 8 verify that even for human figures, Animate-X's performance is on par with the latest works focusing on animating human figures. This strongly indicates the generalization capability of Animate-X.\\n\\n**2)** IPI does retrieve richer information from driven video that is critical to some hard cases that lack of enough details in anthropomorphic figures, e.g., . It is reasonable that its contribution is marginal for those simple human-driven animations that the details are already sufficient to capture human motion, which are not the cases that IPI is designed to address. 
Therefore, for datasets like TikTok with exclusive human data only, we just want to show II also improves a bit and Animate-X is well backward compatible for human figures;\\n\\n**3)** Anthropomorphic characters are arguably more desirable in gaming film and short videos. Therefore we introduce a novel benchmark beyond human, as detailed in Section 3.4. We kindly suggest the reviewer to watch the MP4 videos in the updated supplementary materials.\"}",
"{\"comment\": \"Dear Reviewer ESK8,\\n\\nThank you again for the great efforts and valuable comments. We hope you find the response satisfactory. As the discussion phase is about to close, we are eagerly looking forward to hearing from you regarding any further feedback. We will be more than happy to address any additional concerns you may have.\\n\\nBest,\\n\\nAnimate-X Authors\"}",
"{\"title\": \"Thank you for your valuable feedbacks.\", \"comment\": \"We sincerely thank **Reviewer #4 ESK8** for acknowledging *\\\"the clear motivation, video results, and benchmark presented in our work\\\"*. We have re-uploaded our supplementary materials, which include the complete responses (at `.zip/Animate-X_rebuttal_response_letter.pdf`) along with the relevant figures and tables. The response letter is also contained in the main paper, after page 25. Below, we have addressed each question in detail and hope to clarify any concerns.\\n\\n**Comment 1: The backbone of this work remains unchanged, it is quite similar to the prior works like your reference AnimateAnyone and MagicAnimate, which makes this work a straightforward extension of existing works and thus reduces the contribution of this paper.**\\n\\nThanks for the comments. First of all, the primary contribution of this work is the introduction of the **universal** character image animation. We proposed Animate-X to addresses challenges by leveraging our proposed IPI and EPI modules to implicitly and explicitly model the universal pose indicator. Using the same backbone as AnimateAnyone and MagicAnimate, which have pioneered in latent diffusion models for human animation, allows us to have a fair comparison with these works and demonstrate the contribution of IPI and EPI to animate anthropomorphic figures.\\n\\n\\n***\\n\\n**Comment 2: Leveraging driving videos to boost the animation performance has been already explored in a few prior works like [1]. The implicit pose indicator is also a similar design which aims to extract comprehensive motion patterns to improve the animation performance. [1] X-portrait: Expressive portrait animation with hierarchical motion attention.**\\n\\nThanks for the comments and for introducing X-Portrait. We will cite it and discuss the difference between X-Portrait and ours:\\n- **1. Use of the Driven Video:** In Animate-X, we extract pose images from the driven video to serve as the primary source of motion. Given that a single pose image cannot provide image-level motion-related details (such as motion-induced deformations like body part overlap, occlusion, and overall motion patterns). In contrast, X-Portrait directly inputs the driven video into the model without any processing, which is following most of GAN-based animation methods. \\n- **2. Different Technical Approaches:** X-Portrait follows the approach of ControlNet, where the driving video is fed into an SD U-Net, and then a zero-conv layer is inserted into the main branch of the U-Net. In comparison, our IPI module first uses a pre-trained CLIP encoder to extract features from the driven video and then decouples image-level motion-related features for motion modeling. \\n- **3. Task Scope:** X-Portrait focuses on facial animation, but Animate-X handles full-body animation for universal characters, which includes anthropomorphic figures in cartoons and games.\\n\\nIn summary, Animate-X is different from X-Portrait in *Use of the Driven Video*, *Technical Approaches*, and *Task Scope*.\"}",
"{\"comment\": \"Dear Reviewer aHUH,\\n\\nThank you again for the great efforts and valuable comments. We hope you find the response satisfactory. As the discussion phase is about to close, we are eagerly looking forward to hearing from you regarding any further feedback. We will be more than happy to address any additional concerns you may have.\\n\\nBest,\\n\\nAnimate-X Authors\"}",
"{\"comment\": \"Thank you for your response and valuable comments. We are pleased to know that the responses to W1 and W2 are satisfactory and adequately address your concerns. Regarding your concerns about W3 and W5, we would like to provide the following explanations:\\n\\n**For W3:**\\n\\nWe have carefully studied X-Portrait and Meg-Actor, and we found that they employ Control Modules (in X-Portrait) and DrivenEncoder (in Meg-Actor) to directly extract features from RGB patches without adding any additional constraints. As you mentioned, this approach can easily lead to the inclusion of appearance features. In contrast, our approach first utilizes a pre-trained CLIP encoder to extract CLIP features from RGB patches, which prevents the extraction of appearance features **at the initial step**. Furthermore, we use DWpose data containing only motion information as a guiding signal (i.e., Q) to filter the CLIP features, allowing us to isolate motion-relative features and mitigate the influence of appearance features **in a second step**.\\n\\nAdditionally, regarding your concern about generalization (i.e., \\\"*as the appearance features of each character are distinct, the corresponding appearance-relative motion features also differ*\\\"), the used CLIP image encoder is trained on a large-scale dataset, which equips it to handle characters with different appearances. With the support of our proposed IPI, the features extracted from images by our model are solely related to motion, even if *the appearance of each character is distinct*, which greatly benefits our training process.\\n\\nIf you have any further questions or specific cases you would like to test, we would be more than happy to address them and conduct the necessary tests.\\n\\n***\\n\\n**For W5:**\\n\\nWe have read through the entire MimicMotion paper and checked its open source code many times, but we did not find the \\\"*pose image augmentation*\\\" in MimicMotion and only find the \\\"pose uncertainty estimation\\\", which is different from our proposed EPI. The pose uncertainty estimation is strictly aligned with human poses, which enables the model to generate clear representations of human hands and highlight other key parts of the human body. Our EPI explicitly implements pose augmentation to enhance the model's adaptability to non-human subjects, such as cartoon characters with long arms or those without limbs. If we have any omissions about the \\\"*pose image augmentation*\\\" in MimicMotion that we hope the reviewer can correct, we will conduct corresponding comparative experiments. \\n\\nAnyway, to the best of our ability, we provide the ablation study that we could think of in response to the reviewers' request. Specifically, we replaced our EPI with the \\\"**p**ose **u**ncertainty **e**stimation (PUE)\\\" from MimicMotion. The corresponding results are presented in the table below.\\n\\n| | PSNR* \\u2191 | SSIM \\u2191 | LPIPS \\u2193 | FID \\u2193 | FVD \\u2193 |\\n|----------|---------|--------|---------|--------|---------|\\n| with PUE | 11.95 | 0.404 | 0.526 | 53.83 | 1031.84 |\\n| with EPI (ours) | 13.30 | 0.433 | 0.454 | 32.56 | 893.31 |\\n\\nFrom the results, we can see that PUE from MimicMotion provides the limited improvement to our new task. We appreciate the efforts of MimicMotion in improving pose accuracy. This is a highly effective approach that has greatly inspired us, and we will highlight this point in our revised version. 
Thank you for your contribution to improving the quality of our manuscripts.\"}",
"{\"comment\": \"Thank you to the author(s) for the detailed explanation. Since the replies address most of my questions, I have decided to increase my score.\"}",
"{\"summary\": \"This paper focuses on the animation of non-human characters, addressing two main issues:\\n1)Sole pose skeletons lack image-level details.\\n2)Pose alignment in the self-driven reconstruction training strategy.\\nTo resolve these issues, the paper introduces a Pose Indicator, comprising an Implicit Pose Indicator and an Explicit Pose Indicator. Experimental results demonstrate that the proposed Animate-X achieves effective performance in character animation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.The authors introduce A2Bench, which is helpful for the evaluation of character animation.\\n2.Both qualitative and quantitative experiments are conducted to evaluate the performance of the proposed method.\", \"weaknesses\": \"1.Some parts of the writing can be quite confusing, words and sentences are bad orgnized. For example, in P5 L260, what exactly is in the pose pool? And how is it aligned with the reference?\\n2.The dataset includes 9,000 independently collected videos. Could you analyze these videos, and did other baselines use the same data for training? If not, could this lead to an unfair comparison?\\n3.The authors first identify the weaknesses of previous methods as a conflict between identity preservation and pose control. They further expand on this point by highlighting two specific limitations: the lack of image-level details in sole pose skeletons and pose alignment within the self-driven reconstruction training strategy. However, while the authors clearly state that differences in appearance between characters and humans can negatively impact animation, learning image-level details seems to contradict their viewpoint \\\"sole pose skeletons lack image-level details\\\", making this contribution appear more like a forced addition.\\n4.Additionally, the visualization in Figure 7 provided by the authors also supports w3. The inclusion or exclusion of the IPI appears to have minimal impact on the motion of the Ref image, and with IPI, part of the foot in the Ref image is even missing. This raises doubts about the effectiveness of the IPI module and seems inconsistent with the authors' stated motivation.\\n5.Pose augmentation has already been widely explored in existing methods, such as MimicMotion, which makes the innovation in this paper insufficient.\\n6.This paper lacks comparisons with similar methods, such as MimicMotion, which makes the experimental results less convincing.\\n[1]MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance\", \"questions\": \"See Weakness. If the authors can address all my concerns, I am willing to raise the score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response (2)\", \"comment\": \"| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|---------------------------|---------------|--------------|------------------|---------------|-----------------|----------------|-----------------|\\n| Moore-AnimateAnyone | 9.86 | 0.299 | 1.58E-04 | 0.626 | 50.97 | 75.11 | 1367.84 |\\n| MimicMotion (*ArXiv24*) | 10.18 | 0.318 | 1.51E-04 | 0.622 | 122.92 | 129.40 | 2250.13 |\\n| ControlNeXt (*ArXiv24*) | 10.88 | 0.379 | 1.38E-04 | 0.572 | 68.15 | 81.05 | 1652.09 |\\n| MusePose (*ArXiv24*) | 11.05 | 0.397 | 1.27E-04 | 0.549 | 100.91 | 114.15 | 1760.46 |\\n| Unianimate (*ArXiv24*) | *11.82* | *0.398* | *1.24E-04* | *0.532* | *48.47* | *61.03* | *1156.36* |\\n| **Animate-X #** | 13.46 | 0.441 | 1.19E-04 | 0.468 | 37.76 | 40.19 | 933.43 |\\n| **Animate-X** | **13.60** | **0.452** | **1.02E-04** | **0.430** | **26.11** | **32.23** | **703.87** |\\n\\n**Table IV:** Quantitative comparisons with SOTAs on A\\u00b2Bench. The best and second-best results for each column are **bold** and *italicized*, respectively.\\n\\n***\\n\\n**Comment 3: The authors first identify the weaknesses of previous methods as a conflict between identity preservation and pose control. They further expand on this point by highlighting two specific limitations: the lack of image-level details in sole pose skeletons and pose alignment within the self-driven reconstruction training strategy. However, while the authors clearly state that differences in appearance between characters and humans can negatively impact animation, learning image-level details seems to contradict their viewpoint \\\"sole pose skeletons lack image-level details\\\", making this contribution appear more like a forced addition.**\\n\\nWe disagree with this comment. \\\"*sole pose skeletons lack image-level details* and \\\"*learning image-level details*\\\" are not contradictory but rather represent a cause-and-effect relationship. As shown in Figure 7 (response letter), previous methods extract only pose skeletons from original driving videos. The process can be represented as \\n\\n> video \\u2192 pose skeletons \\u2192 results.\\n\\nThese pose skeletons lack image-level motion-related details, *i.e.*, motion-induced deformations (*e.g.*, body part overlap and occlusion). These details play a crucial role in enhancing character animation, since personification cartoon characters have more unpredictable movement patterns compared to humans. Therefore, we design the IPI module specifically to extract these image-level motion-related details. The process can be represented as:\\n> **Step 1:** (*as same as the previous method*) video \\u2192 pose images \\n> **Step 2:** video \\u2192 IPI \\u2192 image-level motion-related features \\n> **Step 3:** pose images + image-level motion-related features \\u2192 results\\n\\n**Moreover**, the introduction of our IPI module is a core contribution of this paper which is not \\\"*a forced addition*\\\". In previous approaches, temporal information in driven videos was derived solely from multi-frame pose skeletons, often set against pure black backgrounds. The original RGB videos were discarded during the training process. While this method works well for human animation, where carefully designed pose skeletons align perfectly with human joints, it falls short for anthropomorphic characters whose skeletons differ significantly from humans. 
Thus, pose skeletons alone can **NOT** provide sufficient driving guidance, as they lack the motion-related details found only in the original driving video. This is where our IPI module makes a difference, extracting these richer details from the original video to improve the generalization of motion representation modeling.\"}",
"{\"comment\": \"Thank you for your response.\\n\\nFor W3\\n\\nCLIP features exhibit highly aggressive appearance-related semantic information, and many methods leverage them as injected appearance features.\\n\\nAuthors highlight the need to learn motion features tied to appearance while simultaneously learning motion features independent of appearance. This is contradictory and lacks persuasiveness.\", \"for_w5\": \"In MimicMotion/mimicmotion/dwpose/preprocess.py line 43.\"}",
"{\"comment\": \"Thanks a lot for the suggestions and valuable comments! We appreciate for your decision to raise the score! We are confident in our new task and approach we have proposed. We will release more details about this work at an appropriate time to further advance community development.\"}",
"{\"title\": \"Thank you for your valuable feedbacks.\", \"comment\": \"We sincerely thank **Reviewer #2 mbHE** for acknowledging the *\\\"introduced A\\u00b2Bench, the qualitative and quantitative experiments presented in our work\\\"*. We have re-uploaded our supplementary materials, which include the complete responses (at .zip/Animate-X_rebuttal_response_letter.pdf) along with the relevant figures and tables. The response letter is also contained in the main paper, after page 25. Below, we have addressed each question in detail and hope to clarify any concerns.\\n\\n**Comment 1: Some parts of the writing can be quite confusing, words and sentences are bad organized. For example, in P5 L260, what exactly is in the pose pool (Answer 1.1)? And how is it aligned with the reference? (Answer 1.2)**\\n\\n**Answer 1.1.** The pose pool mentioned in P5 L260 consists of all the unenhanced pose images extracted from our training dataset. Specifically, we use DWPose as the pose extractor to obtain skeleton images with a black background from the training videos.\\n\\n**Answer 1.2.** We have provided a detailed explanation of the pose pool and alignment process in Appendix A and Figure 4 (response letter). The alignment process can be organized into the following steps:\\n- **Step1:** Given a driving pose $I^p$, we randomly select an anchor pose $I^p_{anchor}$ from the pose pool.\\n- **Step2:** We then calculate the proportion of each body part between these two poses. For example, the shoulder length of $I^p_{anchor}$ divided by the shoulder length of $I^p$ might be 0.45, and the leg length of $I^p_{anchor}$ divided by the leg length of $I^p$ might be 0.53, and so on.\\n- **Step3:** We multiply each body part of the driven pose (*i.e.*, $I^p$) by the corresponding ratio (*e.g.*, 0.45, 0.53, *etc.*) to obtain the aligned pose (*i.e.*, $I^p_n$).\\n\\n***\\n\\n**Comment 2: The dataset includes 9,000 independently collected videos. Could you analyze these videos (Answer 2.1), and did other baselines use the same data for training (Answer 2.2)? If not, could this lead to an unfair comparison (Answer 2.3)?**\\n\\nThanks for your valuable comments. First, we would like to clarify that we have demonstrated the improvements in our approach stem from the IPI and EPI modules through the extensive and fair ablation experiments. Next, we will address each question in detail.\\n\\n**Answer 2.1.** Following the commonly used public human animation TikTok datasets which consists of videos downloaded from TikTok, we additionally collect 9,000 TikTok-like videos. The distribution of the additional data is similar to the TikTok dataset, primarily consisting of human dance videos.\\n\\n**Answer 2.2.** We notice that other baselines have also used their own collected data for model training. For example, UniAnimate uses 10,000 internal videos. Despite using more data than we did, Animate-X still improves the performance substantially, suggesting that these gains stem from the design of our modules rather than the data.\\n\\n**Answer 2.3.** Data is also the essential contribution of each respective work. The use of independently collected videos, including in our work, is transparently explained in the papers and has become a well-established convention in prior researches. **To address potential concerns**, we have trained our Animate-X solely on the public TikTok and Fashion benchmarks, **without incorporating any extra videos**. 
We have conducted the same experiments as presented in Table 1 (original submission), and reported results marked by # in **Table IV**. As shown in **Table IV**, our method still outperforms other approaches, which further demonstrates that the improvements in Animate-X are driven by the IPI and EPI modules, rather than the use of additional training data.\"}",
"{\"comment\": \"Thank you for your detailed reply, i am willing to raise my score.\\n\\nHowever, while the authors emphasize the performance gain, the reviewer is concerned about the novelty as echoing existing tricks in new tasks is not a good way to advance the research community.\"}",
"{\"comment\": \"Thank you for taking the time to review our revisions and for your willingness to raise the score to 8.\\n\\nWe agree that creating 3D models and rendering them with predefined actions using tools like Blender and Maya is a superior approach for developing a character benchmark. In fact, we are currently making preparations to utilize 3D models to produce animated videos that will showcase a wider array of motion patterns and more complex scenarios to support our benchmark.\\n\\nOnce again, we appreciate your suggestions and your acknowledgment of the authors' explanations regarding the strong performance, writing quality, and the design of the pose pool.\"}",
"{\"title\": \"Further question about selection of $I_{\\\\mathcal{anchor}}^{p}$\", \"comment\": [\"Author(s) describe how transformed poses $I_n^p$ is generated during training. But I still have some corcerns regarding **how anchor poses** are selected? Specifically, are the anchor poses chosen from the entire training set or from a subset?\", \"If they are randomlly selected from the entire training set, how does the static distribution of rescaling ratio (e.g., the shoulder length of $I_{\\\\mathcal{anchor}}^p$ divided by the shoulder length of $I^p$) look like?\", \"If they are selected from a subset, what is the number of anchor poses, and what is the selection rule?\"]}",
"{\"title\": \"Thank you for your valuable feedbacks.\", \"comment\": \"We sincerely thank **Reviewer #1 aHUH** for acknowledging the *``notable improvements of Animate-X''* and the *``comprehensive experiments and ablation studies presented in our work''*. We have re-uploaded our supplementary materials, which include the complete responses (at .zip/Animate-X_rebuttal_response_letter.pdf) along with the relevant figures and tables. The response letter is also contained in the main paper, after page 25. Below, we have addressed each questions in detail and hope to clarify any concerns.\\n\\n***\\n\\n**Comment 1: No video samples from A$^2$Bench are provided; only selected frames are shown in the paper. Given that the generated videos still struggle with maintaining strict logic and good spatial and temporal consistency, I question the rationale for using T2I + I2V to generate benchmark videos.**\\n\\n Thanks. We have provided video samples of A$^2$Bench in the updated *Supplementary Materials* (.zip/for\\\\_reviewer\\\\_aHUH/xxx.mp4). We kindly invite the reviewer to check these videos. Below, we address the reviewer's concerns regarding *``strict logic''* and *``good spatial and temporal consistency''* using T2I + I2V:\\n- **1. Strict logic:** The choice to use T2I models stems from a clear need: current T2V models often struggle with imaginative and logically complex inputs, such as \\\"*personified refrigerators*\\\" or \\\"*human-like bees*\\\". T2I models offer strict logic and imagination in these scenarios, allowing to generate reasonable cartoon characters as the ground-truth. To prove this point, as shown in Table I, we assessed the semantic accuracy of A$^2$Bench using CLIP scores, which are commonly used to evaluate whether the semantic logic of images and text is strictly aligned (*i.e.*, Does the generated ``*human-like bee*'' maintain the visual essence of a bee while seamlessly incorporating human-like features, such as hands and feet?). For comparison, we also evaluate the publicly available TikTok and Fashion datasets using the same metric. These experimental results demonstrate that A$^2$Bench achieves the highest level of strict logical alignment. **Furthermore**, we input the images from A$^2$Bench into a multimodal large language model (MLLM) with logical reasoning, such as QWen, to conduct a logical analysis of the visual outputs generated by the T2I model. The results, shown in Figure 1 (response letter), reveal that the image descriptions answered by the MLLM closely aligns with our input prompts, which verifies again that the data in A$^2$Bench maintains strict logic.\\n- **2. Good spatial and temporal consistency:** We have incorporated several metrics from VBench, such as *Background Consistency*, *Motion Smoothness*, *Aesthetic Quality*, and *Image Quality*, to evaluate the spatial and temporal consistency of the videos in A\\u00b2Bench. As shown in **Table I**, A$^2$Bench outperforms the TikTok dataset across all metrics and achieves comparable scores to the Fashion dataset, both of which are collected from real-world scenarios. This demonstrates that the videos generated by our method exhibit a similar level of spatial and temporal consistency to real-world videos.\\n\\n In summary, to our best knowledge, T2I+I2V is the reasonable and effective solution currently available for automating the production of videos with anthropomorphic cartoon characters. 
Specifically, the T2I model can understand the prompt and generate well-aligned high-quality images with strict logic, while the I2V model can preserve the identity of the characters in the image and generate videos with good spatial and temporal consistency. Moreover, the T2I step allows human artists to check and make manual modification to the cartoon characters if necessary before generating the videos. \\n\\n| **Benchmark** | **CLIP Score** | **Background Consistency** | **Motion Smoothness** | **Aesthetic Quality** | **Image Quality** |\\n|-----------------|----------------|----------------------------|------------------------|------------------------|--------------------|\\n| TikTok | *26.92* | 94.10% | 99.05% | *55.14%* | *62.54%* |\\n| Fashion | 20.18 | **98.25%** | **99.45%** | 49.62% | 49.96% |\\n| A\\u00b2Bench | **33.24** | *96.66%* | *99.39%* | **69.86%** | **69.32%** |\\n\\n**Table I:** Quantitative results of different benchmarks. The best and second-best results for each column are **bold** and *italicized*, respectively.\"}",
"{\"summary\": \"This paper highlights that character animation models trained exclusively on human-only datasets struggle to learn motion patterns from driving videos, often leading to overfitting on the driving pose and poor generalization to anthropomorphic characters.\\n\\nTo address this issue, the authors propose a novel character animation framework called Animate-X, which incorporates two Pose Indicators. The Implicit Pose Indicator extracts motion and integrates it with CLIP features, while the Explicit Pose Indicator supports an augmentation pipeline during training that encourages the model to learn motion from misaligned pose sequences.\\n\\nAdditionally, a new benchmark is established for evaluating anthropomorphic characters. Experiments across multiple datasets demonstrate the effectiveness of the proposed method for animating anthropomorphic characters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a new augmentation method that enhances pose robustness for character animation techniques.\", \"A novel module is proposed to integrate the driving pose with the reference image without relying on a reference network.\", \"A new benchmark is established for evaluating anthropomorphic characters.\", \"The quality of animation results is good, even reference characters do not have leg or arm.\"], \"weaknesses\": [\"The paper lacks a detailed analysis of the construction of the augmentation pool, making it difficult to reproduce the method.\", \"There is insufficient in-depth analysis of the model design, such as why the Implicit Pose Indicator (IPI) outperforms the reference network, which has more learnable parameters.\", \"Most styles in the A2Bench benchmark are \\\"3D render style\\\"; the benchmark should include a wider variety of visual styles.\"], \"questions\": [\"Could the authors provide more details on the construction of the pose pool and alignment pool, such as the pool sizes and how poses are selected from the training set?\", \"Comparing the results in Table 4 and Table 1, Animate-X outperforms the baselines even without pose augmentation (EPI). Could the authors provide a deeper analysis of why the Implicit Pose Indicator (IPI), with fewer parameters, outperforms the reference network?\", \"What happens if the reference pose differs significantly from the candidates in the pose pool and alignment pool? The authors should provide a robustness analysis for this scenario and consider adding a difficulty level split for A2Bench.\", \"Could aligning the driving pose to a \\\"standard\\\" one in the pose pool further improve generation quality?\", \"In the supplementary materials, the authors show results in various styles, yet most styles in A2Bench are in \\\"3D render style.\\\" Would it be possible to add a \\\"style trigger word\\\" in the prompt template to diversify and strengthen the benchmark?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposed Animate-X, a universal animation framework based on diffusion models. The key insight of this work is that existing image animation frameworks are only focused on the human images and fail to extract the movement pattern of the driving video, leading to the rigid retargeting of the driving pose to the target reference image. The authors propose two pose indicators to address this issue, which can capture comprehensive motion patterns from the driving video. The implicit pose indicator helps retrieve relevant features from the driving video, while the explicit one simulates the unaligned driving poses in the inference stage. To evaluate the approaches, the authors also propose a new benchmark which contains in-the-wild and unmatched driving sequence/reference image pairs. Experiments show that the proposed method outperforms state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation of this work is clear, it comes from an in-depth analysis of the failure cases of existing works. The alignment between the driving signal and reference image is critical to the fidelity of character animation. The authors propose an effective method to tackle this problem.\\n2. The experimental results, especially the video results, are reasonable and interesting. The proposed method shows state-of-the-art performance and outperforms baselines in animating in-the-wild characters. This indicates that the training data is well utilised and shows that the proposed method helps improve the generalisation ability of the animation model.\\n3. The evaluation benchmark is valuable to the research community. It can help follow-up works measure their methods comprehensively.\", \"weaknesses\": \"1. The backbone of this work remains unchanged, it is quite similar to the prior works like your reference AnimateAnyone and MagicAnimate, which makes this work a straightforward extension of existing works and thus reduces the contribution of this paper.\\n2. Leveraging driving videos to boost the animation performance has been already explored in a few prior works like [1]. The implicit pose indicator is also a similar design which aims to extract comprehensive motion patterns to improve the animation performance.\\n3. The explicit pose indicator is a little bit confusing because I think this module is an augmentation of the driving pose sequences. Therefore, the novelty of the proposed method is not very significant. It is reasonable that the augmentation can break the strong correspondence between the driving video and motion representation. What is the advantage of this training time rescale augmentation and over the test time pose alignment? Are there any ablation studies about this? \\n4. From the results of the animation of anthropomorphic characters, the example of a banana shows that although the animation result looks like a banana, the motion precision is decreased. Therefore, I think the implicit pose indicator could harm the motion precision. The authors could conduct more experiments to study this issue.\\n\\n[1] X-portrait: Expressive portrait animation with hierarchical motion attention\", \"questions\": \"Does this model still use any input videos in the inference stage? I am asking this question because there are no input driving videos in the \\u201cAnimating anthropomorphic characters\\u201d section of the supplementary materials. Could the author explain the inference setting? 
If there is a corresponding driving video, it is better to also include them into the results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response (3)\", \"comment\": \"**Comment 3: What happens if the reference pose differs significantly from the candidates in the pose pool and alignment pool? The authors should provide a robustness analysis for this scenario.**\\n\\nThanks. We are a bit unsure whether the reviewer's question refers to the training process or the inference process, so we have analyzed both situations. We hope it helps clarify any confusion.\\n- **1. During training:** Significant differences between the reference pose and the candidates in the pose and alignment pools can actually benefit training by enhancing the model's robustness. Different poses enable the model to understand the difference between complex reference image inputs and driven pose video inputs. For example, in the first row of Figure 1 (original submission *i.e.*, the teaser), we use a human skeleton to drive a limb-less character. To achieve such capability, we need to simulate extreme scenarios during training. Therefore, when the reference pose differs significantly from the candidates in the pose pool and alignment pool during training, it enhances the robustness of the model. \\n- **2. During inference:** Even when the reference pose differs significantly from the candidates in the pose and alignment pools, our model is still able to produce reasonable results, which is one of the core challenges addressed in this paper. Our pose pool and alignment pool are designed to encompass a wide range of local deformations, while the IPI module focuses on implicit motion modeling. This combination allows the model to learn generalized motion patterns from videos, rather than being constrained to specific actions. Thus, regardless of the input driver video or its corresponding pose, Animate-X ensures stable and reliable generation without excessive collapse.\\n\\n***\\n\\n**Comment 4: Could aligning the driving pose to a \\\"standard\\\" one in the pose pool further improve generation quality?**\\n\\nYes. Aligning the driving pose to a \\\"*standard*\\\" one can further improve generation quality. This is because the \\\"*aligning*\\\" operation simplifies the complexity of the animation process, making it easier for the model to generate accurate results.\\n\\n***\\n\\n**Comment 5: Consider adding a difficulty level split for A$^2$Bench.**\\n\\nThanks for your valuable suggestion. We have added the difficulty level split for Animate-X. As shown in Figure 10 (response letter), we categorized the videos in A$^2$Bench into three difficulty levels: Level 1, Level 2, and Level 3. The classification is based on their appearance characteristics. \\n- **First**, we classify characters that have body shapes and other appearance features similar to humans, as shown in the first row of Figure 10 (response letter), into the easiest, Level 1 category. These characters are generally simpler to drive, produce fewer artifacts, and have better motion consistency. \\n- **In contrast**, characters that maintain more distinct structural features from humans, such as dragons and ducks in the third row of Figure 10 (response letter), are classified into the most difficult Level 3 category. These characters often preserve their original structures (*e.g.*, a duck's webbed feet and wings), which makes balancing identity preservation and motion consistency more challenging. To ensure identity preservation, the consistency of motion may be compromised, and vice versa. 
Additionally, images involving interactions between characters, objects, environments, and backgrounds are also placed in Level 3, as they increase the difficulty for the model to distinguish the parts that need to be driven from those that do not. \\n- **Videos in between these two categories**, like those in the second row of Figure 10 (response letter), are classified as Level 2. These characters often strike a good balance between anthropomorphism and their original form, making them easier to animate with better motion consistency than Level 3 characters and more interesting results than Level 1 characters. \\n\\n***\\n\\n**Comment 6: Most styles in the A$^2$Bench benchmark are \\\"3D render style\\\"; the benchmark should include a wider variety of visual styles. In the supplementary materials, the authors show results in various styles, yet most styles in A$^2$Bench are in \\\"3D render style.\\\" Would it be possible to add a \\\"style trigger word\\\" in the prompt template to diversify and strengthen the benchmark?**\\n\\nFollowing your suggestions, we have added style trigger words such as *\\\"Watercolor Painting\\\"*, *\\\"Cyberpunk Style\\\"*, *\\\"Van Gogh\\\"*, *\\\"Ukiyo-E\\\"*, *\\\"Pixel Art\\\"*, and so on. Some results are shown in Figure 11 (response letter), which indeed enriches the benchmark and strengthens its diversity. Please see `(.zip/for_reviewer_feUz/more_style/xxx.mp4)` for video results. Thank you for your valuable suggestions.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Thank you for your valuable feedbacks.\", \"comment\": [\"We sincerely thank **Reviewer #3 feUz** for acknowledging *\\\"the new method, benchmark, and animation results presented in our work\\\"*. We have re-uploaded our supplementary materials, which include the complete responses (at `.zip/Animate-X_rebuttal_response_letter.pdf`) along with the relevant figures and tables. The response letter is also contained in the main paper, after page 25. Below, we have addressed each question in detail and hope to clarify any concerns.\", \"**Comment 1: The paper lacks a detailed analysis of the construction of the augmentation pool, making it difficult to reproduce the method. Could the authors provide more details on the construction of the pose pool and alignment pool, such as the pool sizes and how poses are selected from the training set?**\", \"Thanks for your feedback. Yes, we present the detailed analysis of the construction of the augmentation pool. Please refer to Figure 4 (response letter) or Figure 8 (original submission) for an illustration of the following process:\", \"**Step1:** We first construct the pose pool using the DWPose extractor. The pose pool is composed of pose skeletons (*i.e.*, pose images);\", \"**Step2:** Given a driving pose $I^p$, we randomly select an anchor pose $I^p_{anchor}$ from the pose pool.\", \"**Step3:** We then calculate the proportion of each body part between these two poses. For example, the shoulder length of $I^p_{anchor}$ divided by the shoulder length of $I^p$ might be 0.45, and the leg length of $I^p_{anchor}$ divided by the leg length of $I^p$ might be 0.53, and so on.\", \"**Step4:** We multiply each body part of the driven pose (*i.e.*, $I^p$) by the corresponding ratio (*e.g.*, 0.45, 0.53, *etc.*) to obtain the aligned pose (*i.e.*, $I^p_n$).\", \"**Step5:** Then we define a set of keypoint rescaling operations, including modifying the length of the body, legs, arms, neck, and shoulders, altering face size, adding or removing specific body parts, *etc*. These transformations are stored in a rescale pool.\", \"**Step6:** We apply the selected transformations on the aligned pose $I^p_{realign}$ to obtain the final transformed poses $I^p_n$.\"]}",
"{\"title\": \"Response (2)\", \"comment\": \"**Comment 2: Here is insufficient in-depth analysis of the model design, such as why the Implicit Pose Indicator (IPI) outperforms the reference network, which has more learnable parameters. Comparing the results in Table 4 and Table 1, Animate-X outperforms the baselines even without pose augmentation (EPI). Could the authors provide a deeper analysis of why the Implicit Pose Indicator (IPI), with fewer parameters, outperforms the reference network?**\\n\\nSure, IPI outperforms the reference network because the latter focuses on extracting content features from reference images, while IPI focuses on motion, aiming to capture a universal motion representation. The reference network intends to capture all appearance details of the reference image. In contrast, IPI only models the motion-related image-level detais, so IPI can employ a smaller network to do the job. We provide a detailed explanation of how IPI improves the performance as follows:\\n- **1. Reference network:** From the results using current methods using the reference network, *e.g.*, MimicMotion, we observe an inherent trade-off between overly precise poses and low fidelity to reference images. While the reference network attempts to address this by extracting additional appearance information from the reference image to improve fidelity through the denoising model, Figure 9 (response letter) illustrates that the reference network based approach remains insufficient, as precise human poses still dominate.\\n- **2. IPI:** To address the observed limitations, we shifted our focus from appearance information to motion as the critical factor in our work. Simple 2D pose skeletons, constructed by connecting sparse keypoints, lack the image-level details needed to capture the essence of the reference video, such as motion-induced deformations (*e.g.*, body part overlap and occlusion). This absence of image-level details causes previous methods, even those using a reference network, to produce results with consistent poses but compromised identity fidelity. To overcome this issue, we introduced the IPI module to recover these missing **motion-related** image-level details. Specifically, IPI employs a pretrained CLIP encoder to extract features from the driving image, followed by a lightweight extractor ($P$) to isolate the motion-related details. This approach enables IPI to outperform the reference network, which, despite having more learnable parameters, unable to capture these essential motion-related features.\\n\\nAs shown in Figure 9 (response letter), methods utilizing reference networks, such as AnimateAnyone, primarily focus on preserving colors from the reference image, as demonstrated by the white hat and yellow body of the potato in the first row. However, these methods cannot maintain the identity of the reference image, often generating videos that deviate from the original image, such as forcefully inserting human limbs onto potatoes. It highlights the limitation of reference networks, which prioritize color consistency over identity preservation, leading to weaker performance on quantitative metrics like SSIM, L1, and FID. \\n\\nIn contrast, as shown in Figure 7 (original submission), even without the EPI module, Animate-X successfully generates a panda that retains the identity of the reference image. This leads to substantial improvements in SSIM, L1, and FID compared to baselines that rely on reference networks, even without the EPI module.\"}",
"{\"title\": \"Further question about comment 5.\", \"comment\": \"I appreciate the effect of the difficulty level split provided in the response letter. Could the author(s) please provide additional evaluation results (maybe 1~2 methods) for each subset?\"}",
"{\"summary\": \"This work presents an animation framework capable of animating anthropomorphic characters, along with an accompanying benchmark for animated anthropomorphic characters. Specifically, the framework introduces an Implicit Pose Indicator and an Explicit Pose Indicator to provide rich pose guidance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The visual results of Animate-X demonstrate notable improvements across various characters compared to existing animation methods.\\n2. Comprehensive experiments and ablation studies are presented.\", \"weaknesses\": \"1. No video samples from A2Bench are provided; only selected frames are shown in the paper. Given that the generated videos still struggle with maintaining strict logic and good spatial and temporal consistency, I question the rationale for using T2I + I2V to generate benchmark videos. Additionally, the benchmark lacks detailed information, such as video length and frame rate. Were any additional motion prompts used to generate videos from images? If so, what is their diversity and complexity?\\n2. The necessity of a pose pool and the selection of an anchor pose image need clarification. What operations are involved in the \\\"align\\\" process, specifically regarding translation and rescaling? Why not use random translation and rescaling instead of relying on an anchor pose image?\\n3. The effectiveness of the Implicit Pose Indicator (IPI) is also in question. The motivation for the IPI is that sparse keypoints lack image-level details, while IPI aims to retrieve richer information. However, Tables 7 and 8 indicate that Animate-X achieves comparable performance to Animate-Anyone and UniAnimate on human videos. This suggests that the IPI does not provide any benefits for human animation.\", \"questions\": \"Please address the concerns in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response for *further question about comment 5*\", \"comment\": \"Following your suggestion, we evaluate the results of Animate-X and UniAnimate for each subset and present the results below:\\n\\n| | PSNR* \\u2191 | SSIM \\u2191 | L1 \\u2193 | LPIPS \\u2193 | FID \\u2193 | FID-VID \\u2193 | FVD \\u2193 |\\n|-------------------|-------|-------|----------|--------|-------|---------|---------|\\n| Animate-X-level1 | 13.96 | 0.461 | 9.67E-05 | 0.418 | 24.24 | 31.37 | 681.53 |\\n| Animate-X-level2 | 13.74 | 0.457 | 9.82E-05 | 0.429 | 26.12 | 32.19 | 693.63 |\\n| Animate-X-level3 | 13.17 | 0.442 | 1.11E-04 | 0.437 | 27.34 | 35.64 | 721.41 |\\n| UniAnimate-level1 | 11.93 | 0.413 | 1.14E-04 | 0.521 | 42.39 | 52.14 | 1120.45 |\\n| UniAnimate-level2 | 11.89 | 0.408 | 1.20E-04 | 0.526 | 46.27 | 58.53 | 1147.34 |\\n| UniAnimate-level3 | 10.91 | 0.379 | 1.35E-04 | 0.549 | 56.58 | 65.39 | 1204.53 |\\n\\nWe can see that as the difficulty increases, each evaluation result shows a decline. Thank you for your feedback and we will include the above results in the supplementary materials.\"}",
"{\"title\": \"Looking forward to your improving the final score in the system\", \"comment\": \"Dear Reviewer aHUH,\\n\\nThank you for your efforts and valuable comments on improving the quality of our manuscript. We kindly remind that the discussion time is coming to an end. We are also glad to hear that you are willing to raise the score, and we look forward to your editing the final score in the system. Thank you again for your efforts and please feel free to contact us if you have any further questions.\\n\\nBest regards,\\n\\nThe Authors of Animate-X\"}",
"{\"metareview\": \"The paper received very positive ratings from the reviewers. They highlight improved results, new ideas, the introduction of a new benchmarks, as well as the ability of the method to animate objects not having a distinct skeleton structure. The reviewers also pointed out that at times the paper can be hard to follow, or can be lacking the necessary analysis. This led to a lengthy discussion between the authors and reviewers during which most of concerns were addressed, leading to improved scores. The AC agrees, the manuscript presents an interesting piece of work. Congrats!\", \"additional_comments_on_reviewer_discussion\": \"There was a very healthy discussion between the reviewers and the authors. New ablations were shown, unclear parts were explained, certain analyses were given.\"}",
"{\"comment\": \"Thanks for your response.\\n\\n**For W3:**\\n\\n1. Our discussion on CLIP\\u2019s ability to avoid appearance-based information extraction refers specifically to X-Portrait and Meg-Actor. They use raw RGB data, which fully preserves all appearance information.\\n\\n2. The CLIP encoder does retrieve richer information, including the appearance information, from driven videos. But in IPI, we use DWPose data containing only motion information as a guiding signal (i.e., Q) to filter the CLIP features, allowing the model to isolate motion-relative features and mitigate the influence of appearance features. This is a trade-off based on the model\\u2019s learning capability, where the newly learned motion representation has stronger modeling ability for general motion. In contrast, pure motion features represented by pose skeletons are completely decoupled from video appearance. However, their strong correspondence with the human body makes it challenging to generalize to non-human subjects. This is one of the reasons we aim to enhance motion features. Our ablation experiments show that using the IPI and an appropriate training strategy does not cause the model to incorporate appearance information from the reference video. \\n\\n***\\n\\n**For W5:**\\n\\nFirst, the purpose of this widely used function (MimicMotion/mimicmotion/dwpose/preprocess.py line 43) is to **align** the driven pose with the reference image, while our EPI serves as an augmentation during the training process to **prevent alignment**. Second, this codebase only includes inference codes, and the MimicMotion paper (Page 4, Sec. 3.2, arXiv:2406.19680) does not mention the use of this function during training or inference. Since the training data is already aligned (*i.e.*, the reference image is randomly sampled from the same video), we believe the relevance of this function to training is minimal. Therefore, we consider these as two distinct contributions from different perspectives, which are not in conflict.\"}",
"{\"title\": \"Response (3)\", \"comment\": \"**Comment 4: From the results of the animation of anthropomorphic characters, the example of a banana shows that although the animation result looks like a banana, the motion precision is decreased. Therefore, I think the implicit pose indicator could harm the motion precision (Answer 4.1). The authors could conduct more experiments to study this issue (Answer 4.2).**\\n\\n**Answer 4.1:** First of all, we need to clarify that the implicit pose indicator does not harm motion precision. We have demonstrated that adding the IPI module to the baseline results in improvements across all quantitative metrics, highlighting its contributions to every aspect of animation through extensive ablation experiments (i.e., **Table III**).\\n\\n| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|--------------------|---------------|----------------|------------------|-----------------|-----------------|----------------|-----------------|\\n| w/o IPI | 13.30 | 0.433 | 1.35E-04 | *0.454* | 32.56 | 64.31 | 893.31 |\\n| w/o LQ | *13.48* | 0.445 | 1.76E-04 | *0.454* | 28.24 | 42.74 | 754.37 |\\n| w/o DQ | 13.39 | 0.445 | **1.01E-04** | 0.456 | 30.33 | 62.34 | 913.33 |\\n| **Animate-X** | **13.60** | **0.452** | *1.02E-04* | **0.430** | **26.11** | **32.23** | **703.87** |\\n\\n**Table III:** Quantitative results of the ablation study on IPI. The best and second-best results for each column are **bold** and *italicized*, respectively.\\n\\n**Answer 4.2:** As shown in Figure 12 (response letter), we have conducted additional experiments on the banana case and provided a detailed discussion. Specifically, we input the banana image and the driven poses into the model without the IPI module to generate the results. As shown in Figure 12 (response letter), we observe that without the IPI module, the model generates the human-like arms, which was not the intended outcome. In contrast, Animate-X (with IPI) prioritized preserving the banana's identity and avoiding obvious artifacts. We believe this trade-off is reasonable and aligns with the limitation discussed in our paper: the excessive sacrifices in identity preservation in favor of strict pose consistency.\\n\\nTo balance pose consistency and identity preservation, we assigned an appropriate weight to the IPI module. In this way, we generated the preferrable result, as shown in the last row of Figure 12 (response letter). To allow users to control the trade-off, we made this weight an adjustable parameter. Additionally, we conducted detailed experiments and analysis of this weight, as presented in Figure 12 (original submission).\\n\\n***\\n\\n**Comment 5: Does this model still use any input videos in the inference stage (Answer 5.1)? I am asking this question because there are no input driving videos in the \\u201cAnimating anthropomorphic characters\\u201d section of the supplementary materials. Could the author explain the inference setting (Answer 5.2)? If there is a corresponding driving video, it is better to also include them into the results (Answer 5.3).**\\n\\n**Answer 5.1:** Yes, this model can still use any input videos during the inference stage.\\n\\n**Answer 5.2:** Yes. 
As shown in Figure 13 (response letter), during inference, our method takes a reference image and a driven video as input and outputs an animated video that maintains the same identity as the reference image and the same motion as the driven video.\\n\\n**Answer 5.3:** Thanks. Following your suggestions, we have included the corresponding driving video in the results. Please see the videos in (`.zip/for_reviewer_ESK8/for_comment_5/xxx.mp4`).\"}",
"{\"title\": \"We sincerely thank all reviewers for their careful reading and constructive comments.\", \"comment\": [\"We sincerely thank all reviewers for their careful reading and constructive comments, which have been invaluable in improving our work. We also deeply appreciate the reviewers\\u2019 acknowledgment of:\", \"The notable improvements across various characters compared to existing animation methods (aHUH, feUz, ESK8)\", \"Comprehensive experiments and ablation studies (aHUH and mbHE)\", \"The introduction of A2Bench (mbHE, feUz, ESK8)\", \"The proposed novel module (feUz, ESK8)\", \"In response to the reviewers' comments, we have re-uploaded our supplementary materials, which include the **complete responses** (at **.zip/Animate-X_rebuttal_response_letter.pdf**) along with the relevant figures and tables. The response letter is also contained in the **main paper, after page 25**. We sincerely invite the reviewers to refer to these materials for a better reading experience. We hope that our response satisfactorily addresses your concerns.\"]}",
"{\"title\": \"Thanks a lot for the suggestions and valuable comments!\", \"comment\": \"Thanks a lot for the suggestions and valuable comments! We are pleased to know that our responses have addressed your questions. We appreciate for your decision to raise the score!\"}",
"{\"comment\": \"Dear Reviewer feUz,\\n\\nThank you again for the great efforts and valuable comments. We hope you find the response satisfactory. As the discussion phase is about to close, we are eagerly looking forward to hearing from you regarding any further feedback. We will be more than happy to address any additional concerns you may have.\\n\\nBest,\\n\\nAnimate-X Authors\"}",
"{\"title\": \"Response (4)\", \"comment\": \"| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|---------------------------|---------------|--------------|------------------|---------------|-----------------|----------------|-----------------|\\n| MimicMotion (*ArXiv24*) | 12.66 | 0.407 | 1.07E-04 | 0.497 | 96.46 | 61.77 | 1368.83 |\\n| **Animate-X** | **14.10** | **0.463** | **8.92E-05** | **0.425** | **31.58** | **33.15** | **849.19** |\\n\\n**Table VI:** Quantitative comparisons with MimicMotion on A\\u00b2Bench in the self-driven setting. The best results for each column are **bold**.\\n\\n***\\n\\n| **Method** | **Moore-AA** | **MimicMotion** | **ControlNeXt** | **MusePose** | **Unianimate** | **Animate-X** |\\n|---------------------------|--------------|-----------------|-----------------|--------------|----------------|--------------------|\\n| **Identity preservation \\u2191** | 60.4% | 14.8% | 52.0% | 31.3% | 43.0% | **98.5%** |\\n| **Temporal consistency \\u2191** | 19.8% | 24.9% | 36.9% | 43.9% | 81.1% | **93.4%** |\\n| **Visual quality \\u2191** | 27.0% | 17.2% | 40.4% | 40.3% | 79.3% | **95.8%** |\\n\\n**Table VII:** User study results. The best results for each metric are **bold**.\\n\\n***\\n\\n| **Method** | **L1** \\u2193 | **PSNR*** \\u2191 | **SSIM** \\u2191 | **LPIPS** \\u2193 | **FVD** \\u2193 |\\n|---------------------------|---------------|---------------|--------------|---------------|-----------------|\\n| MimicMotion (*ArXiv24*) | 5.85E-04 | 14.44 | 0.601 | 0.414 | 232.95 |\\n| **Animate-X** | **2.70E-04** | **20.77** | **0.806** | **0.232** | **139.01** |\\n\\n**Table VIII:** Quantitative comparisons with MimicMotion on the TikTok dataset. The best results for each metric are **bold**.\\n\\n***\\n\\n| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **LPIPS** \\u2193 | **FVD** \\u2193 |\\n|---------------------------|---------------|--------------|---------------|-----------------|\\n| MimicMotion (*ArXiv24*) | 27.06 | 0.928 | 0.036 | 118.48 |\\n| **Animate-X** | **27.78** | **0.940** | **0.030** | **79.4** |\\n\\n**Table IX:** Quantitative comparisons with MimicMotion on the Fashion dataset. The best results for each metric are **bold**.\"}",
"{\"title\": \"Response (3)\", \"comment\": \"**Comment 4: Additionally, the visualization in Figure 7 provided by the authors also supports w3. The inclusion or exclusion of the IPI appears to have minimal impact on the motion of the Ref image, and with IPI, part of the foot in the Ref image is even missing. This raises doubts about the effectiveness of the IPI module and seems inconsistent with the authors' stated motivation.**\\n\\nThanks for the comment. The *\\\"missing foot\\\"* is caused by the video not being fully displayed in our submission, rather than an issue with our IPI module. We have added more frames of the video in Figure 8 (response letter). Please refer to the video result in `(.zip/for_reviewer_mbHE/full_frame_of_figure7.mp4)`. As shown in Figure 8 (response letter), in the initial frames, the foot is present and highly consistent with the reference image. Subsequently, the driven pose image begins to perform a leg-merging motion, with the distance between the legs gradually decreasing. To allow the anthropomorphic bamboo character to follow this motion, it also gradually merges its legs, giving the appearance of the *\\\"missing foot\\\"*.\\n\\n***\\n\\n**Comment 5: Pose augmentation has already been widely explored in existing methods, such as MimicMotion, which makes the innovation in this paper insufficient.**\\n\\nThe primary contribution of our work is animating anthropomorphic figures using two new modules, IPI and EPI, which go beyond simple *\\\"pose augmentation\\\"*. Pose augmentation is a training strategy and is not exclusive to any specific method. By itself, it cannot solve the animation issue in our work. The IPI and EPI modules designed to handle figures beyond human and human pose are novel to address the specific challenges in animating anthropomorphic figures. We then provide a detailed explanation of the concept beyond \\\"*Pose Augmentation*\\\". Please refer to Figure 4 (response letter) or Figure 8 (original submission) for an illustration of the following process:\\n- **Step1:** We first construct the pose pool using the DWPose extractor. The pose pool consists of pose skeletons (*i.e.*, pose images).\\n- **Step2:** Given a driving pose $I^p$, we randomly select an anchor pose $I^p_{anchor}$ from the pose pool.\\n- **Step3:** We then calculate the proportion of each body part between these two poses. For example, the shoulder length of $I^p_{anchor}$ divided by the shoulder length of $I^p$ might be 0.45, and the leg length of $I^p_{anchor}$ divided by the leg length of $I^p$ might be 0.53, and so on.\\n- **Step4:** We multiply each body part of the driven pose (*i.e.*, $I^p$) by the corresponding ratio (*e.g.*, 0.45, 0.53, *etc.*) to obtain the aligned pose (*i.e.*, $I^p_n$).\\n- **Step5:** Then we define a set of keypoint rescaling operations, including modifying the length of the body, legs, arms, neck, and shoulders, altering face size, adding or removing specific body parts, *etc*. These transformations are stored in a rescale pool.\\n- **Step6:** We apply the selected transformations on the aligned pose $I^p_{realign}$ to obtain the final transformed poses $I^p_n$.\\n\\n*** \\n\\n**Comment 6: This paper lacks comparisons with similar methods, such as MimicMotion, which makes the experimental results less convincing. 
[1] MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance.**\\n\\nWe have already conducted:\\n**(1) Quantitative comparisons** with MimicMotion in Tables 1, 2, 7, and 8 in the original submission. \\n**(2) Qualitative comparisons** with MimicMotion in Figure 5 and the videos in the original *Supplementary Materials*. \\n**(3) The user study comparison** with MimicMotion in Table 3 in the original submission. \\nFor your convenience, we highlight and summarize these results below.\\n\\n| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|---------------------------|---------------|--------------|------------------|---------------|-----------------|----------------|-----------------|\\n| MimicMotion (*ArXiv24*) | 10.18 | 0.318 | 1.51E-04 | 0.622 | 122.92 | 129.40 | 2250.13 |\\n| **Animate-X** | **13.60** | **0.452** | **1.02E-04** | **0.430** | **26.11** | **32.23** | **703.87** |\\n\\n**Table V:** Quantitative comparisons with MimicMotion on A\\u00b2Bench with the rescaled pose setting. The best results for each column are **bold**.\"}",
"{\"comment\": \"Dear authors,\\n\\nThanks for your effort in the rebuttal. I have carefully read all the responses, sup mat, as well as the comments and responses of other reviewers. Most of my questions have been addressed by the rebuttal, and I choose to remain with the current rating. The reason for not raising is that this work is an extension of the prior works in this field. Considering the standard of this conference, its contributions are not significant enough.\"}",
"{\"title\": \"Response for *Further question about comment 3.*\", \"comment\": \"Thanks. Since we are unable to update the PDF to include the visual results at this stage, we describe them as clearly as possible in text. If there are any misunderstandings, please point them out.\\n\\n**First**, we have included the distribution statistics of the static values for the augmentation pose pool (as mentioned in the *Response for the further question about selection of $I^p_{anchor}$*). **Then**, to investigate the model's performance on driven videos with sufficiently large differences from these augmentation strategies, we collect such cases as input: The driven video features the pose images with a very high body but an extremely slim physique ( the height/width ratio > f). The arms are quite short (< c), while the legs are long (>b). Our method still successfully animate the reference image with the motion of the driven video.\\n\\nThe model maintains this robustness because it learns the local motion patterns. As the reviewer mentioned, although cases like \\\"*leg(longer than b) + arm(shorter than c) + ratio(far outside the range e to f)*\\\" are not seen in the training data, the model learns to recognize the pixel changes when a part, like the arm, is significantly shortened. The same applies to the legs and the ratio. When these components are combined, the diffusion model is able to handle them properly. Additionally, for extreme cases, such as very long arms, there is always a pixel boundary (*i.e.*, the pixel boundary of each frame or image). However, our EPI covers most of this range, and any missing parts can be addressed by the model\\u2019s inherent generative capability and the IPI.\\n\\nIf possible, we will add the visual results and analysis to the final supplementary materials, which is limited by current rules. Thanks for your suggestions.\"}",
"{\"comment\": \"Dear Reviewer ESK8,\\n\\nThank you for dedicating your valuable time to review our work and for carefully reading our responses, supplementary materials, as well as the comments and responses of other reviewers.\\n\\nWe are pleased to know that our responses have addressed your questions.\\n\\nIf you have any further questions or concerns, please do not hesitate to reach out to us for further discussion.\\n\\nBest regards,\\n\\nAnimate-X authors\"}",
"{\"comment\": \"Thank you for your response and further discussion.\\n\\nThe anchor poses are chosen from the entire training set. To address your next question about \\u201c*If they are randomly selected from the entire training set, how does the static distribution of the rescaling ratio (e.g., the shoulder length of $I^p_{anchor}$ divided by the shoulder length of $I^p$) look like?*\\u201d we conducted the following statistical analysis.\\n\\nFirst, we randomly sampled a driven pose $I^p$ and then traversed the entire pose pool, treating each pose in the pool as an anchor pose to calculate the rescaling ratio. We repeated this process 10 times. Finally, we divided the range from 0.001 to 10 into 10 intervals, counting the proportion of rescaling ratios that fell within each interval. In addition to the shoulder length that you mentioned, we also analyzed the proportions of other important parts like body length, upper arm length, lower arm length, upper leg length, lower leg length. \\n\\n| Interval | Shoulder Lenght | Body Length | Upper Arm Length | Lower Arm length | Upper Leg Length | Lower Leg length |\\n|--------------|-----------------|-------------|------------------|------------------|------------------|------------------|\\n| [0.001, 0.1) | 0.19% | 0.14% | 0.05% | 0.08% | 0.05 | 0.81% |\\n| [0.1, 0.3) | 1.52% | 5.73% | 4.04% | 3.22% | 0.59% | 4.60% |\\n| [0.3, 0.5) | 12.21% | 18.57% | 15.28% | 7.63% | 4.26% | 5.65% |\\n| [0.5, 0.7) | 15.33% | 16.93% | 12.97% | 7.54% | 12.02% | 9.61% |\\n| [0.7, 1.0) | 20.07% | 18.48% | 17.15% | 11.35% | 24.86% | 19.53% |\\n| [1.0, 1.5) | 22.09% | 18.63% | 17.56% | 15.38% | 27.90% | 24.89% |\\n| [1.5, 2) | 10.07% | 8.34% | 7.93% | 11.73% | 14.31% | 14.47% |\\n| [2.0, 3.0) | 9.75% | 6.52% | 7.73% | 16.19% | 11.83% | 15.28% |\\n| [3.0, 6.0) | 6.33% | 6.28% | 10.93% | 18.40% | 2.73% | 4.30% |\\n| [6.0, 10.0) | 2.43% | 0.37% | 6.37% | 8.47% | 1.45% | 0.85% |\\n\\nIt can be observed that the overall distribution covers a wide range (from 0.001 to 10.0), which allows the model to learn poses of various characters, encompassing non-human subjects. We will include the above statistical information in the supplementary materials. Thank you for the valuable comments.\\n\\n\\nIf you have any further questions or concerns, please do not hesitate to reach out to us for further discussion.\", \"title\": \"Response for the further question about selection of $I^p_{anchor}$\"}",
"{\"title\": \"Further question about comment 3.\", \"comment\": \"My question in comment 3 is related to the question in comment 1. In comment 1, I wanted to understand the pose statistics in the augmentation pool, and in this comment, I want to know whether Animate-X can only handle poses from a specific domain (which might be more diverse than previous methods).\\n\\nFor example, if all poses in the augmentation pool have legs ranging from a to b pixels and arms from c to d pixels, with height/width ratios between e and f, what would happen if the user provides a driving video where the character\\u2019s legs are much longer than b, the arms are shorter than c, and the height/width ratio is far outside the range e to f?\\n\\nSince most characters in the $A^2$ Bench have a similar height/width ratio, the author(s) should provide the static values for the augmentation pose pool and include some visualization results demonstrating the model\\u2019s robustness on driving videos where the poses do not lie within the augmentation pose pool.\"}",
"{\"comment\": \"Thank you for your response, the reviewer acknowledges that the responses to W1 and W2 are satisfactory and address the concerns adequately.\\n\\nHowever, the responses for W3 and W5 remain insufficient to fully address my concerns:\", \"regarding_w3\": \"The use of RGB patches has already been proposed in existing portrait animation methods, such as X-Portrait and Meg-Actor. However, these methods are only capable of learning appearance-related motion within the human in-domain, leading to significant content leakage issues and a lack of adaptability to out-domain scenarios.\\nTherefore, I believe it is unreasonable to introduce appearance feature learning in this context, as the appearance features of each character are distinct, and the corresponding appearance-relative motion features also differ.\", \"regrading_w5\": \"The effectiveness of pose image augmentation has already been demonstrated in MimicMotion, which diminishes the insight provided by this paper. Perhaps an ablation study comparing the data augmentation methods in this paper and MimicMotion could be provided?\"}",
"{\"comment\": \"Dear Reviewer mbHE,\\n\\nThank you again for the great efforts and valuable comments. We hope you find the response satisfactory. As the discussion phase is about to close, we are eagerly looking forward to hearing from you regarding any further feedback. We will be more than happy to address any additional concerns you may have.\\n\\nBest,\\n\\nAnimate-X Authors\"}",
"{\"comment\": \"Thanks for your feedbacks which address most of my concerns. However, I am still disagree with that T2I + I2V is the optimal way to constitute the benchmark for character animation task. Utilizing this approach for training data is acceptable; however, it is not particularly appropriate for benchmarking purposes as we have more higher standard, that is, the ground-truth, for benchmark samples. The authors give many quantitative values to demonstrate the feasibleness, but the values sometimes fail to align with human preference. Specifically, the provided videos in the benchmark show some undesirable artifacts (blurring, twisting, etc.) around the hands and feet, and tend to show similar motion patterns without complex scenarios due to the limitation of current I2V models. I think a better way of creating a character benchmark is to create 3D models and render them with predefined actions with 3D tools such as Blender, Maya. Of course, it is more laborious, expensive and requires expert skills.\\n\\nAnyway, I will raise my score to 8 based on the explanation from the authors for the good performance, the writing and the design of a pose pool.\"}",
"{\"title\": \"Response (2)\", \"comment\": \"**Comment 3: The explicit pose indicator is a little bit confusing because I think this module is an augmentation of the driving pose sequences. Therefore, the novelty of the proposed method is not very significant. It is reasonable that the augmentation can break the strong correspondence between the driving video and motion representation. What is the advantage of this training time rescale augmentation and over the test time pose alignment (Answer 3.1)? Are there any ablation studies about this? (Answer 3.2)**\\n\\n**Answer 3.1:** The advantages of training time rescale augmentation over the test time alignment are as follows:\\n- **1. Generalization for Characters Without Extractable Poses:** For reference images with structures significantly different from human skeletons, such as the limb-less fairy shown in Figure 1 (original submission), pose extraction using DWPose is not feasible, which is because DWPose is specifically designed for processing human poses. Consequently, pose alignment at test time cannot be performed, making the diffusion model challenging to generate reasonable videos. In contrast, training time rescale augmentation enables the diffusion model to learn how to handle misaligned reference and driven poses, enhancing its robustness and generalization. In this way, Animate-X can handle scenarios where poses cannot be extracted from the reference image, as it eliminates the need for pose alignment between the reference and driven pose images during inference.\\n- **2. Reduced Dependency on Strict Pose Alignment:** Even when pose alignment is available at test time, the results often rely heavily on precise alignment. For example, if the aligned pose differs in arm length from the reference image (*e.g.*, a longer arm), the generated result will reflect this discrepancy, compromising identity preservation. In contrast, rescale augmentation during training reduces the model\\u2019s dependence on strict pose alignment, ensuring that even with imperfect or absent alignment, the generated results can still effectively preserve identity information.\\n- **3. Simpler Test-Time Workflow and Faster Inference:** For example, animating 100,000,000 reference images with a single driven pose using previous methods would require extracting the pose for each of the 100,000,000 reference images, followed by an equal number of strict pose alignment operations. In contrast, our method removes the need for these alignment operations, significantly reducing inference time and simplifying the test-time process.\\n\\n**Answer 3.2:** We have conducted extensive ablation experiments for different pairs of pose transformations in EPI, as detailed in Appendix D.4 and **Table X**. 
The results show that each pose transformation improves performance compared to the scenarios without augmentation, confirming the effectiveness of the augmentation operation in enhancing the model's performance.\\n\\n| **Method** | **PSNR*** \\u2191 | **SSIM** \\u2191 | **L1** \\u2193 | **LPIPS** \\u2193 | **FID** \\u2193 | **FID-VID** \\u2193 | **FVD** \\u2193 |\\n|---------------------------|---------------|--------------|------------------|---------------|-----------------|----------------|-----------------|\\n| w/o Add in EPI | 13.28 | 0.442 | 1.56E-04 | 0.459 | 34.24 | 52.94 | 804.37 |\\n| w/o Drop in EPI | 13.36 | 0.441 | 1.94E-04 | 0.458 | *26.65* | 44.55 | 764.52 |\\n| w/o BS in EPI | 13.27 | 0.443 | 1.08E-04 | 0.461 | 29.60 | 56.56 | 850.17 |\\n| w/o NF in EPI | *13.41* | *0.446* | 1.82E-04 | 0.455 | 29.21 | 56.48 | 878.11 |\\n| w/o AL in EPI | 13.04 | 0.429 | *1.04E-04* | 0.474 | 27.17 | *33.97* | 765.69 |\\n| w/o Rescalings in EPI | 13.23 | 0.438 | 1.21E-04 | 0.464 | 27.64 | 35.95 | *721.11* |\\n| w/o Realign in EPI | 12.27 | 0.433 | 1.17E-04 | *0.434* | 34.60 | 49.33 | 860.25 |\\n| w/o EPI | 12.63 | 0.403 | 1.80E-04 | 0.509 | 42.17 | 58.17 | 948.25 |\\n| **Animate-X** | **13.60** | **0.452** | **1.02E-04** | **0.430** | **26.11** | **32.23** | **703.87** |\\n\\n**Table X:** Quantitative results of the ablation study. The best and second-best results for each metric are **bold** and *italicized*, respectively.\"}"
]
} |
1Iuw1jcIrf | MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code | [
"Zimu Lu",
"Aojun Zhou",
"Ke Wang",
"Houxing Ren",
"Weikang Shi",
"Junting Pan",
"Mingjie Zhan",
"Hongsheng Li"
] | Code has been shown to be effective in enhancing the mathematical reasoning abilities of large language models due to its precision and accuracy. Previous works involving continued mathematical pretraining often include code that utilizes math-related packages, which are primarily designed for fields such as engineering, machine learning, signal processing, or module testing, rather than being directly focused on mathematical reasoning. In this paper, we introduce a novel method for generating mathematical code accompanied with corresponding reasoning steps for continued pretraining. Our approach begins with the construction of a high-quality mathematical continued pretraining dataset by incorporating math-related web data, code using mathematical packages, math textbooks, and synthetic data. Next, we construct reasoning steps by extracting LaTeX expressions, the conditions needed for the expressions, and the results of the expressions from the previously collected dataset. Based on this extracted information, we generate corresponding code to accurately capture the mathematical reasoning process. Appending the generated code to each reasoning step results in data consisting of paired natural language reasoning steps and their corresponding code. Combining this data with the original dataset results in a 19.2B-token high-performing mathematical pretraining corpus, which we name MathCode-Pile. Training several popular base models with this corpus significantly improves their mathematical abilities, leading to the creation of the MathCoder2 family of models. All of our data processing and training code is open-sourced, ensuring full transparency and easy reproducibility of the entire data collection and training pipeline. | [
"large language model",
"mathematical reasoning",
"continued pretraining"
] | Accept (Spotlight) | https://openreview.net/pdf?id=1Iuw1jcIrf | https://openreview.net/forum?id=1Iuw1jcIrf | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vQ7DPRcYhB",
"tW5Cren1gB",
"rs921tBM1K",
"m7a9l3v4Dd",
"gngTofi0vZ",
"ci74qgpKhH",
"OyUoz9BF0J",
"OA6rsUnbKA",
"NAimN9yNDb",
"MCK0blSBL8",
"KrOkiJtY8p",
"G7valsf3k6",
"8L4DjC1XWL",
"3tzhvNCiB3",
"3haoqi3Bz0"
],
"note_type": [
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment"
],
"note_created": [
1732538562168,
1730688250578,
1734773294787,
1732064620583,
1732235712473,
1732209279196,
1732532654982,
1732064729868,
1732065315452,
1730695569483,
1732064936038,
1732064986734,
1730594371724,
1737523432412,
1732065138251
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Reviewer_G66U"
],
[
"ICLR.cc/2025/Conference/Submission1035/Area_Chair_kzKu"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Reviewer_G66U"
],
[
"ICLR.cc/2025/Conference/Submission1035/Area_Chair_kzKu"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Reviewer_Yuwf"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1035/Reviewer_Le5X"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission1035/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for the time and effort.\", \"comment\": \"Dear Area Chair,\\n\\nThank you for the time and effort you have dedicated to overseeing the review process for our work. We truly appreciate you reminding the reviewers to read our rebuttal and ensuring that our response received the necessary attention. Your support and guidance throughout this process mean a great deal to us.\\n\\nSincerely, \\nThe Authors\"}",
"{\"summary\": \"The paper \\\"MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code\\\" explores enhancing the mathematical reasoning capabilities of LLMs through continued pretraining on a novel dataset called MathCode-Pile. This dataset is constructed from various sources, including math-related web data, textbooks, and synthetic data. A key contribution is the generation of paired natural language reasoning steps and Python code, aimed at improving the alignment between mathematical reasoning and executable code. The authors demonstrate significant improvements in mathematical reasoning benchmarks such as MATH and GSM8K, using models fine-tuned with MathCode-Pile. The paper also emphasizes the open-source nature of their data processing and training pipeline.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality: The paper introduces a novel method of generating and pairing Python code with natural language reasoning steps, enhancing the mathematical reasoning capabilities of large language models.\", \"Quality of Dataset: The MathCode-Pile dataset, comprising 19.2B tokens, is a significant contribution, demonstrating meticulous curation from diverse sources like web data, math textbooks, and synthetic examples.\", \"Significant Performance Gains: The use of this dataset leads to notable improvements across various models, including Llama-3-8B, DeepSeekMath-7B, and Code-Llama-7B, especially on benchmarks like MATH and GSM8K.\", \"Detailed Methodology: The process of extracting LaTeX expressions, conditions, and results to generate corresponding Python code is well-documented, offering transparency and reproducibility.\", \"Open-Source Commitment: The release of data processing and training code enhances the research community's ability to validate and build upon this work.\"], \"weaknesses\": [\"Generalizability of Code Generation: The method\\u2019s applicability to more abstract or advanced mathematical domains is unclear, particularly beyond high-school-level math.\", \"Evaluation Uncertainty: It is ambiguous whether the generated Python code is executed during benchmark evaluations or merely used for pretraining, leaving questions about its practical impact.\", \"Scope Limitation: The focus on grade-school-level mathematics is not explicitly emphasized, potentially misleading readers about the dataset\\u2019s broader applicability.\", \"Ablation Study Depth: While the ablation studies show the value of the synthesized code, further exploration into the necessity of aligning reasoning steps with code versus treating them as independent could provide deeper insights.\"], \"questions\": [\"Code Execution in Evaluation: Is the Python code generated and executed during benchmark evaluations? Clarifying this would help to understand the role of Tool-Integrated Reasoning in the observed performance improvements.\", \"Generalization to Formal Proofs: Can the method be extended to generate formal proofs in languages like Lean or Coq? Specifically, how well does the approach handle abstract reasoning steps that require formal verification, which might be better suited to proof assistants rather than executable Python code?\", \"Independent Reasoning Steps: Would separating reasoning steps and corresponding code into independent examples still yield significant improvements? 
Such an ablation could help assess the criticality of their alignment in the dataset.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper proposes a novel method for generating paired pretraining data that combines mathematical code with corresponding reasoning steps to improve LLMs' mathematical reasoning capabilities through continued pretraining. Using this approach, the authors introduce MathCode-Pile, which combines their generated data with existing datasets to create a comprehensive 19.2B-token mathematical pretraining corpus.\\n\\nTheir experimental evaluation, conducted across LLMs ranging from 2-8B parameters that are continued pretrained on MathCode-Pile, demonstrates the effectiveness of their approach in enhancing mathematical reasoning capabilities. The reviewers particularly valued the extensive experiments across various LLMs. There is consensus among reviewers that the proposed data generation method is both effective and yields significant improvements in performance. \\n\\nThe authors have adequately addressed most reviewer concerns, including questions about potential impacts on LLMs' capabilities in other domains and the method's applicability to other programming languages. While a more detailed analysis of potential data leakage issues would have been beneficial, this limitation is not unique to this work but rather a broader challenge in the current era of LLM pretraining. \\n\\nOverall, this is a valuable contribution to the field. I agree with the reviewers that the work's comprehensive evaluation and impressive performance improvements merit publication at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}",
"{\"title\": \"Response to Official Review by Reviewer Yuwf (1/2)\", \"comment\": \"Thank you for taking the time to review our work and for providing your insightful feedback.\\n\\n**Q1:** Python was chosen for code snippets; is it possible to use specialized math software language instead (e.g., Mathematica)? This is not a direct limitation of this paper, but a possible future direction.\\n\\n**A1:** Thank you for your insightful suggestion. Using math software languages such as Mathematica and MATLAB paired with natural language reasoning is indeed a possible future direction. \\n\\nWe chose Python for code snippets because it has wide spread accessibility, is easier to use, and more suitable for large-scale execution. Applying our method to specialized math software languages could further enhance mathematical reasoning capabilities, opening up new possibilities for improvement. To provide a glimpse into this direction, we have included examples of model-generated synthetic Mathematica and MATLAB code paired with natural language reasoning below. Thank you once again for your thoughtful feedback.\\n\\n**Example 1 of Mathematica:**\", \"computation_1\": \"Intersection of Probabilities of Two Dice\", \"conditions_needed\": \"1. We have two dice, each with $n$ sides.\\n2. We want to find the probability that the maximum of the two dice is equal to some value $a$.\", \"computation_expression\": \"$$P(X = a) = P(X \\\\le a) - P(X \\\\le a-1) = \\\\frac{a^2}{n^2} - \\\\frac{(a-1)^2}{n^2}$$\", \"computation_result\": \"The probability that the maximum of the two dice is equal to $a$ is $\\\\frac{a^2}{n^2} - \\\\frac{(a-1)^2}{n^2}$.\", \"mathematica_code_snippet\": \"```mathematica\\n(* Define a function for the probability that the maximum is exactly equal to a *)\\nprobabilityMaxEqual[a_, n_] := (a^2)/n^2 - ((a - 1)^2)/n^2;\\n\\n(* Example for two dice with 6 sides and max equal to 3 *)\\nprobabilityMaxEqual[3, 6]\\n```\", \"computation_2\": \"Probability Distribution Function of the Maximum of Two Dice\"}",
"{\"title\": \"Thank you very much for your response.\", \"comment\": \"Thank you for acknowledging our rebuttal efforts. We sincerely appreciate the time and effort you dedicated to reviewing our work and providing thoughtful feedback. Your suggestions have been invaluable in helping us improve our project.\\n\\nIf you have any additional questions or suggestions, please don\\u2019t hesitate to reach out. We would be happy to provide further clarification or discussion.\\n\\nWarm regards, \\nThe Authors\"}",
"{\"comment\": \"I would like to express my gratitude to the authors for spending the effort to address my questions in detail.\\n\\nThe authors acknowledged the current limitations of their dataset and model, which focus on problems up to college level and provided empirical results demonstrating improved formal reasoning capabilities on the minif2f_isabelle benchmark, which adds credibility to their claim that MathCode-Pile enhances reasoning in formal languages. The inclusion of Lean-generated examples further supports the potential extension of this work to formal proof systems.\\n\\nThe authors also clarified that Python code is not executed during benchmark evaluations, ensuring a fair comparison with baselines.\\n\\nBy explicitly addressing the focus on grade-school to college-level mathematics and acknowledging that formal theorem proving is outside the scope of this study, the authors are avoiding potential misunderstandings about the dataset's broader applicability.\\n\\nThe authors conducted an additional ablation study to explore the impact of aligning reasoning steps with code. The results clearly show that the alignment contributes to improved performance, reinforcing the design choice. The addition of these results to the paper demonstrates responsiveness to feedback and enhances the rigour of the evaluation.\\n\\nOverall, the authors have responded thoroughly and addressed all my concerns and questions effectively. I appreciate their transparency in acknowledging limitations and their effort to provide new empirical evidence. These additions strengthen the paper, and I consider the matter closed. I will not change my score.\"}",
"{\"title\": \"Action Required: Respond to Author Rebuttals - Nov 27\", \"comment\": \"Dear ICLR Reviewers,\\n\\nThe author discussion phase is ending soon. Please promptly review and respond to author rebuttals for your assigned papers. Your engagement is critical for the decision-making process.\", \"deadlines\": \"- November 26: Last day for reviewers to ask questions to authors.\\n- November 27: Last day for authors to respond to reviewers.\\n- November 28 - December 10: Reviewer and area chair discussion phase.\\n\\nThank you for your timely attention to this matter.\\n\\nBest,\\n\\nAC\"}",
"{\"title\": \"Response to Official Review by Reviewer Yuwf (2/2)\", \"comment\": \"**Example 1 of MATLAB:**\", \"computation_1\": \"Product Rule for Derivatives\", \"conditions_needed\": \"1. The function is $f(x) = e^{2x} \\\\cdot \\\\ln(x)$.\", \"computation_expression\": \"$$\\n\\\\frac{d}{dx} (e^{2x} \\\\cdot \\\\ln(x)) = e^{2x} \\\\cdot \\\\frac{d}{dx} (\\\\ln(x)) + \\\\ln(x) \\\\cdot \\\\frac{d}{dx} (e^{2x})\\n$$\", \"computation_result\": \"$$\\ne^{2x} \\\\cdot \\\\left( 2 \\\\ln(x) + \\\\frac{1}{x} \\\\right)\\n$$\", \"matlab_code_snippet\": \"```matlab\\n% Define the population distribution (uniform distribution for simplicity)\\nf = @(x) 1; % PDF of uniform distribution\\nF = @(x) x; % CDF of uniform distribution\\nk = 5; % Number of observations below the median (n = 2k - 1)\\n\\n% Calculate the sampling distribution of the sample median\\nx = 0:0.01:1;\\nf_sample_median = (factorial(2*k-1) / (factorial(k-1)^2)) .* f(x) .* (F(x) .* (1 - F(x))) .^ (k-1);\\n\\n% Plot the result\\nplot(x, f_sample_median);\\nxlabel('x');\\nylabel('f_{X_{(k)}}(x)');\\ntitle('Sampling Distribution of the Sample Median');\\n```\\n\\n**Example 2 of MATLAB:**\", \"computation_2\": \"Final Derivative of $f(x) = e^{2x} \\\\cdot \\\\ln(x)$\", \"computation_3\": \"Sampling Distribution of the Sample Median\", \"mathematica_code_snippet\": \"```mathematica\\n(* Define the symbol x *)\\nClear[x]\\n\\n(* Define the function f(x) = e^(2x) * Log(x) *)\\nf = Exp[2 x] * Log[x];\\n\\n(* Compute the derivative of f(x) *)\\nderivative = D[f, x]\\n```\"}",
"{\"title\": \"Response to Official Review by Reviewer Le5X\", \"comment\": \"Thank you for your thoughtful review and for highlighting areas in need of improvement. Your feedback is invaluable in helping us improve our project.\\n\\n**Q1:** The paper lacks an analysis of potential data leakage between MathCode-Pile and evaluation benchmarks, which could artificially inflate model performance.\\n\\n**A1:** Thank you for your comment. As mentioned in the fourth paragraph of Section 2.2, to avoid benchmark contamination (or leakage), we filter samples that significantly overlap with questions from the benchmark datasets used in evaluation. Similar to GPT-3 [1] and Llama2 [2], we apply exact matching to remove identical samples and further use 13-gram deduplication (with a condition that the Jaccard similarity should be larger than 0.6) to eliminate additional samples that might cause contamination.\\n\\nWe also apply the n-gram testing, as demonstrated in the table below. The overlap percentages for various n-grams are quite low, and the overlap becomes 0.00% when n-grams are 13. This analysis has been added to Appendix F.\\n\\n| n-grams | 3 | 4 | 5 | 6 | 7 | 8 | 13 |\\n|------------------|---------|---------|---------|---------|---------|---------|---------|\\n| Overlap Ratio (%)| 0.21% | 0.12% | 0.06% | 0.03% | 0.02% | 0.01% | 0.00% |\\n\\n[1] Brown, Tom B. \\\"Language models are few-shot learners.\\\" arXiv preprint arXiv:2005.14165 (2020).\\n\\n[2] Touvron, Hugo, et al. \\\"Llama 2: Open foundation and fine-tuned chat models.\\\" arXiv preprint arXiv:2307.09288 (2023).\\n\\n**Q2:** I am interested about whether the MathCode-Pile\\u2019s strong focus on mathematical reasoning might impact the model\\u2019s performance in non-mathematical domains. For example, whether this dataset would enhance the model\\u2019s general coding abilities beyond math-focused tasks.\\n\\n**A2:** Thank you for your suggestion. We tested the MathCoder2 models on HumanEval and MBPP, two representative benchmarks for evaluating models' general coding abilities, using the EvalPlus framework [1]. HumanEval+ and MBPP+ are extended versions of HumanEval and MBPP that include additional test samples, as described in [2]. The pass@1 accuracies are presented in the table below.\\nAs shown in the results, training on MathCode-Pile improves the performance of Llama3-8B, DeepSeekMath-7B, and Mistral-7B on general coding benchmarks. The performance of MathCoder2-CodeLlama-7B is comparable to CodeLlama-7B, which is understandable since CodeLlama is specifically trained for code generation. These findings highlight that MathCode-Pile can enhance general coding abilities beyond math-specific tasks, particularly for models not explicitly trained for code generation.\\n\\n|Model|HumanEval|HumanEval+|MBPP|MBPP+|\\n|---|---|---|---|---|\\n|Llama3-8B\\t|40.2\\t|35.4\\t|61.9\\t|52.1|\\n|**MathCoder2-Llama3-8B**\\t|**51.8**\\t|**43.3**\\t|**61.9**\\t|**52.1**|\\n|DeepSeekMath-7B\\t|36.0\\t|28.7\\t|64.8\\t|52.9|\\n|**MathCoder2-DeepSeekMath-7B**\\t|**36.6**\\t|**32.3**\\t|**66.7**\\t|**54.8**|\\n|Mistral-7B\\t|29.3\\t|23.8\\t|51.3\\t|40.5|\\n|**MathCoder2-Mistral-7B**\\t|**39.6**\\t|**34.1**\\t|**54.5**\\t|**46.8**|\\n|CodeLlama-7B\\t|37.8\\t|**35.4**\\t|**59.5**\\t|46.8|\\n|MathCoder2-CodeLlama-7B\\t|**38.4**\\t|32.3\\t|58.5\\t|**47.4**|\\n\\nTo evaluate how MathCode-Pile influences the general abilities of LLMs, we tested the MathCoder2 models on Hellaswag, PIQA, and Winogrande using the lm-evaluation-harness framework [3]. 
As shown in the table below, training on MathCode-Pile has a slight impact on the performance of general-purpose models such as Llama3-8B and Mistral-7B, likely because MathCode-Pile consists entirely of math-related data. The accuracy of specialized models, such as DeepSeekMath-7B and CodeLlama-7B, remains similar before and after training.\\nWe have included this discussion in Appendix G of the revised paper. As shown in the second table below, other specialized models like CodeLlama and DeepSeekMath experience a slight decrease in performance on general benchmarks. In future work, we plan to incorporate general-purpose training data and adjust the ratio of math-related data to mitigate its impact on the general abilities of LLMs.\\n\\n|Model\\t|Hellaswag\\t|PIQA\\t|Winogrande|\\n|---|---|---|---|\\n|Llama3-8B\\t|79.2\\t|81.0\\t|73.4|\\n|**MathCoder2-Llama3-8B**\\t|75.9\\t|78.1\\t|71.7|\\n|DeepSeekMath-7B\\t|66.4\\t|74.7\\t|64.6|\\n|**MathCoder2-DeepSeekMath-7B**\\t|66.9\\t|74.0\\t|63.1|\\n|Mistral-7B\\t|81.1\\t|82.0\\t|73.9|\\n|**MathCoder2-Mistral-7B**\\t|78.1\\t|78.0\\t|72.3|\\n|CodeLlama-7B\\t|62.9\\t|72.5\\t|64.7|\\n|**MathCoder2-CodeLlama-7B**\\t|62.8\\t|72.3\\t|63.7|\\n\\n\\n|Model\\t|Hellaswag\\t|PIQA\\t|Winogrande|\\n|---|---|---|---|\\n|Llama-7B\\t| 76.1\\t|79.8\\t| 70.1|\\n|**CodeLlama-7B** |62.9\\t|72.5\\t|64.7|\\n|DeepSeek-7B\\t|75.4\\t|79.2\\t|70.5|\\n|**DeepSeekMath-7B**|66.4\\t|74.7\\t|64.6|\\n\\n[1] https://github.com/evalplus/evalplus\\n\\n[2] Liu, Jiawei, et al. \\\"Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] https://github.com/EleutherAI/lm-evaluation-harness\"}",
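The 13-gram contamination filter described in A1 above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released decontamination code; the whitespace tokenization and the helper names (`ngrams`, `jaccard`, `is_contaminated`) are assumptions, while the n=13 window and the 0.6 Jaccard threshold come directly from the response.

```python
# Illustrative sketch only (not the authors' code): flag a training sample as
# contaminated if it exactly matches a benchmark question, or if its 13-gram
# set has Jaccard similarity > 0.6 with that of any benchmark question.
def ngrams(tokens, n=13):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 0.0

def is_contaminated(sample, benchmark_questions, n=13, threshold=0.6):
    if sample in benchmark_questions:  # exact-match filter
        return True
    sample_grams = ngrams(sample.split(), n)  # whitespace tokenization assumed
    return any(
        jaccard(sample_grams, ngrams(q.split(), n)) > threshold
        for q in benchmark_questions
    )
```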
"{\"summary\": \"This paper presents a novel approach for enhancing mathematical reasoning in large language models (LLMs). Unlike previous models that used math-related code without detailed explanations, MathCoder2 generates mathematical code paired with natural language reasoning. This process involves filtering a large math-related dataset from web pages, synthetic sources, code, and textbooks to build a high-quality corpus called MathCode-Pile.\\n\\nThis dataset consists of 19.2 billion tokens and includes LaTeX-extracted mathematical expressions, conditions, results, and Python code to capture the underlying reasoning. MathCoder2 uses this corpus to significantly improve performance on various mathematical benchmarks, achieving results competitive with state-of-the-art models. Moreover, the MathCoder2 framework is fully open-source, which supports reproducibility and transparency in model training and data processing. This work sets a foundation for future research by focusing on reasoning capabilities through detailed code integration.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"MathCoder2\\u2019s MathCode-Pile corpus is rigorously curated and filtered from diverse math-related sources, including web data, synthetic data, specialized code, and textbooks. This ensures relevance, reduces noise, and provides a comprehensive dataset tailored specifically for mathematical reasoning, which is essential for pretraining LLMs in this area.\\n\\nMathCoder2 demonstrates significant gains on multiple mathematical reasoning benchmarks, outperforming comparable models across different tasks. The improvement underscores the effectiveness of continued pretraining on the structured MathCode-Pile corpus and shows MathCoder2's potential for real-world applications in math-intensive fields.\", \"weaknesses\": \"There are no major weaknesses.\", \"questions\": \"Python was chosen for code snippets; it it possible to use specialized math software language instead (e.g., Mathematica)? This is not a direct limitation of this paper, but a possible future direction.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Official Review by Reviewer G66U (1/3)\", \"comment\": \"Thank you for the time and effort you have given to reviewing our work. We greatly value your insights and suggestions, and are happy to address your questions and clarify any concerns in the following responses.\\n\\n**Q1:** Generalizability of Code Generation: The method\\u2019s applicability to more abstract or advanced mathematical domains is unclear, particularly beyond high-school-level math. \\nCan the method be extended to generate formal proofs in languages like Lean or Coq? Specifically, how well does the approach handle abstract reasoning steps that require formal verification, which might be better suited to proof assistants rather than executable Python code?\\n\\n**A1:** Thank you for your valuable comment. It is true that our work primarily focuses on mathematical problems ranging from grade-school word problems (GSM8K) to challenging high-school competition problems (MATH) and college-level problems (OCW), demonstrating notable improvements on a wide range of mathematical benchmarks.\\n\\nWe also test our model on the popular formal language benchmark of minif2f_isabelle. As shown in the table below, on Llama3-8B and DeepSeekMath-7B, the accuracy on minif2f_isabelle improves after training with MathCode-Pile. This demonstrates that MathCode-Pile also improves models\\u2019 ability to do formal reasoning. \\n\\n| Model Name | minif2f_isabelle |\\n|----------------------------|------------------|\\n| Llama3-8B | 17.2% |\\n| MathCoder2-Llama3-8B | 22.5% |\\n| DeepSeekMath-7B | 21.3% |\\n| MathCoder2-DeepSeekMath-7B | 21.7% |\\n\\nFormal proofs, however, are beyond the scope of this work. Currently, our dataset does not contain any formal proof data. We have added this as a limitation and potential future work in Section 5. In the future, we plan to extend our method to generating formal proofs in languages like Lean, Coq, and Isabelle to further improve the formal reasoning ability of MathCoder2. Additionally, we have included some examples of Lean code generated using our method below. Thank you once again for your valuable feedback and support.\\n\\n**Example 1 of Lean:**\", \"theorem_1\": \"The notation $\\\\frac{\\\\partial^2 z}{\\\\partial x \\\\partial y}$ represents the second derivative of $z$ with respect to $y$ and then $x$.\", \"conditions\": \"1. The function $z$ is a function of $x$ and $y$, i.e., $z = f(x, y)$.\\n2. The partial derivatives of $z$ with respect to $x$ and $y$ exist and are continuous.\\n3. The function $z$ is at least twice differentiable, meaning that the second partial derivatives $\\\\frac{\\\\partial^2 z}{\\\\partial x \\\\partial y}$ and $\\\\frac{\\\\partial^2 z}{\\\\partial y \\\\partial x}$ exist.\", \"proof_process\": \"We want to prove that $\\\\frac{\\\\partial^2 z}{\\\\partial x \\\\partial y}$ represents the second derivative of $z$ with respect to $y$ and then $x$. Here\\u2019s a step-by-step breakdown:\\n\\n1. First, recall the definition of a partial derivative. The partial derivative of $z = f(x, y)$ with respect to $x$ is the rate of change of $f(x, y)$ as $x$ changes, while keeping $y$ constant:\\n $$\\n \\\\frac{\\\\partial f}{\\\\partial x} = \\\\lim_{\\\\Delta x \\\\to 0} \\\\frac{f(x + \\\\Delta x, y) - f(x, y)}{\\\\Delta x}\\n $$\\n2. Next, consider the second mixed partial derivative $\\\\frac{\\\\partial^2 z}{\\\\partial x \\\\partial y}$. 
This means we first take the derivative of $f(x, y)$ with respect to $x$, holding $y$ constant, and then we differentiate this result with respect to $y$.\", \"formally\": \"$$\\n \\\\frac{\\\\partial^2 f}{\\\\partial x \\\\partial y} = \\\\frac{\\\\partial}{\\\\partial y} \\\\left( \\\\frac{\\\\partial f}{\\\\partial x} \\\\right)\\n $$\\n We must show that this is consistent with the notation $\\\\frac{\\\\partial^2 f}{\\\\partial x \\\\partial y}$.\\n\\n3. Apply the definition of partial derivatives: In the definition of mixed partials, we can interchange the order of differentiation if the second partial derivatives are continuous (Clairaut's Theorem). Thus:\\n $$\\n \\\\frac{\\\\partial^2 f}{\\\\partial x \\\\partial y} = \\\\frac{\\\\partial^2 f}{\\\\partial y \\\\partial x}\\n $$\\n This result holds for continuous second partial derivatives, and it follows directly from the symmetry of second mixed partials in the case of functions that are sufficiently smooth.\", \"lean_code_snippet\": \"```lean\\nimport analysis.calculus\\n\\nvariables {x y : \\u211d} {f : \\u211d \\u2192 \\u211d \\u2192 \\u211d}\\n\\n-- Define the second partial derivative of a function\\ndef second_partial_derivative_x_y (f : \\u211d \\u2192 \\u211d \\u2192 \\u211d) (x y : \\u211d) : \\u211d :=\\n \\u2202 (\\u2202 f x) y\\n\\n-- Lemma showing that the second partial derivative with respect to x then y is the same as y then x\\nlemma partial_derivatives_commute (f : \\u211d \\u2192 \\u211d \\u2192 \\u211d) (x y : \\u211d) :\\n \\u2202\\u00b2 f x y = \\u2202\\u00b2 f y x :=\\nbegin\\n -- Apply the fact that mixed partial derivatives commute for sufficiently smooth functions\", \"have_h\": \"\\u2200 (f : \\u211d \\u2192 \\u211d \\u2192 \\u211d) (x y : \\u211d), continuous (\\u03bb (z : \\u211d), \\u2202 f z y) \\u2192 continuous (\\u03bb (z : \\u211d), \\u2202 f x z),\\n { intro f, intros x y, apply continuous_smoothness_of_partial_derivatives, },\\n -- Now apply Clairaut's Theorem\\n exact h f x y,\\nend\\n```\"}",
"{\"title\": \"Response to Official Review by Reviewer G66U (2/3)\", \"comment\": \"**Example 2 of Lean:**\", \"theorem\": \"The derivative of $e^{2x} \\\\cdot \\\\ln x$ is $e^{2x} \\\\left( 2 \\\\ln x + \\\\frac{1}{x} \\\\right)$.\", \"conditions\": \"1. The derivative of $e^{g(x)}$ is $e^{g(x)} \\\\cdot g'(x)$, i.e., the derivative of the exponential function is the exponential function itself multiplied by the derivative of the exponent.\\n2. The derivative of $\\\\ln x$ is $\\\\frac{1}{x}$, i.e., the derivative of the natural logarithm function is the reciprocal of $x$.\", \"proof_process\": \"We are tasked with proving that the derivative of the product of the two functions $h(x) = e^{2x}$ and $l(x) = \\\\ln x$ follows the product rule for derivatives:\\n\\n1. Start with the product rule for derivatives. The product rule states that if we have two differentiable functions $h(x)$ and $l(x)$, then the derivative of their product is:\\n $$\\n \\\\frac{d}{dx} \\\\left( h(x) \\\\cdot l(x) \\\\right) = h'(x) \\\\cdot l(x) + h(x) \\\\cdot l'(x)\\n $$\\n\\n2. Apply the chain rule to $h(x) = e^{2x}$. The function $h(x) = e^{2x}$ is a composition of functions, so we apply the chain rule:\\n $$\\n h'(x) = \\\\frac{d}{dx} e^{2x} = e^{2x} \\\\cdot \\\\frac{d}{dx}(2x) = 2 e^{2x}\\n $$\\n\\n3. Differentiate $l(x) = \\\\ln x$. The derivative of the natural logarithm function is:\\n $$\\n l'(x) = \\\\frac{d}{dx} \\\\ln x = \\\\frac{1}{x}\\n $$\\n\\n4. Combine the results. Applying the product rule to the functions $h(x) = e^{2x}$ and $l(x) = \\\\ln x$, we get:\\n $$\\n \\\\frac{d}{dx} \\\\left( e^{2x} \\\\cdot \\\\ln x \\\\right) = 2 e^{2x} \\\\cdot \\\\ln x + e^{2x} \\\\cdot \\\\frac{1}{x}\\n $$\\n \\n5. Simplify the expression. Factor out $e^{2x}$:\\n $$\\n \\\\frac{d}{dx} \\\\left( e^{2x} \\\\cdot \\\\ln x \\\\right) = e^{2x} \\\\left( 2 \\\\ln x + \\\\frac{1}{x} \\\\right)\\n $$\\n\\nThus, we've proven that:\\n$$\\n\\\\frac{d}{dx} \\\\left( e^{2x} \\\\cdot \\\\ln x \\\\right) = e^{2x} \\\\left( 2 \\\\ln x + \\\\frac{1}{x} \\\\right)\\n$$\", \"lean_code_snippet\": \"```lean\\nimport tactic\\n\\nvariables {x : \\u211d}\\n\\n-- Lemma for the product rule\\nlemma derivative_product_rule (h l : \\u211d \\u2192 \\u211d) (h' l' : \\u211d \\u2192 \\u211d) :\\n (h * l).derivative = h'.derivative * l + h * l'.derivative :=\\nbegin\\n ext, -- Apply the extensionality tactic to simplify and reason about derivatives\\n simp, -- Simplify the goal using basic simplification rules\\n ring, -- Apply ring tactics to simplify the expression\\nend\\n\\n-- Lemma for the derivative of e^(g(x)) where g is a function\\nlemma derivative_e_pow (g : \\u211d \\u2192 \\u211d) (g' : \\u211d \\u2192 \\u211d) :\\n (e ^ g).derivative = e ^ g * g'.derivative :=\\nbegin\\n ext, -- Apply the extensionality tactic to simplify and reason about derivatives\\n simp, -- Simplify the goal using basic simplification rules\\n ring, -- Apply ring tactics to simplify the expression\\nend\\n\\n-- Lemma for the derivative of ln(x)\", \"lemma_derivative_ln\": \"(ln x).derivative = 1 / x :=\\nbegin\\n ext, -- Apply the extensionality tactic to simplify and reason about derivatives\\n simp, -- Simplify the goal using basic simplification rules\\n ring, -- Apply ring tactics to simplify the expression\\nend\\n\\n-- Final derivative computation for e^(2x) * ln(x)\\nlemma derivative_f (x : \\u211d) :\\n (e ^ (2 * x) * ln x).derivative = e ^ (2 * x) * (2 * ln x + 1 / x) :=\\nbegin\\n -- Use the product rule for the derivative of the product e^(2x) * 
ln(x)\", \"have_h\": \"(e ^ (2 * x) * ln x).derivative = (e ^ (2 * x)).derivative * ln x + e ^ (2 * x) * (ln x).derivative,\\n by apply derivative_product_rule,\\n \\n -- Apply the chain rule to differentiate e^(2x) and ln(x)\\n have h' : (e ^ (2 * x)).derivative = e ^ (2 * x) * (2 * x).derivative,\\n by apply derivative_e_pow,\\n \\n have h'' : (ln x).derivative = 1 / x,\\n by apply derivative_ln,\\n \\n -- Substitute and simplify the terms\\n rw [h, h', h''],\\n ring, -- Use the ring tactic to simplify the final expression\\nend\\n```\"}",
"{\"summary\": \"This paper proposes MathCode-Pile, a 19.2B-token dataset of math text and Python code. The dataset includes high-quality math-related web content, code with mathematical packages, math textbooks, and synthetic data. In addition, they present MathCoder2, a family of large language models with enhanced mathematical reasoning capabilities over MathCode-Pile.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The author combining symbolic math reasoning with executable code in the dataset, MathCode-Pile, which is noval. This innovative methodology extends prior research, making MathCode-Pile a significant resource for advanced math reasoning tasks.\\n\\n2. The paper is clearly organized, with a well-structured explanation of each step in the MathCode-Pile creation and model evaluation process. Figures and tables also effectively illustrate the overall data pipeline.\\n\\n3. This work has great significance in advancing mathematical reasoning within language models. MathCoder2, using MathCode-Pile, achieves superior results on math benchmarks, demonstrating the potential of code-paired reasoning data.\", \"weaknesses\": \"1. The paper lacks a analysis of potential data leakage between MathCode-Pile and evaluation benchmarks, which could artificially inflate model performance.\", \"questions\": \"I have interested about whether the MathCode-Pile\\u2019s strong focus on mathematical reasoning might impact the model\\u2019s performance in non-mathematical domains. For example, whether this dataset would enhance the model\\u2019s general coding abilities beyond math-focused tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}",
"{\"title\": \"Response to Official Review by Reviewer G66U (3/3)\", \"comment\": \"**Q2:** Evaluation Uncertainty: It is ambiguous whether the generated Python code is executed during benchmark evaluations or merely used for pretraining, leaving questions about its practical impact. Is the Python code generated and executed during benchmark evaluations? Clarifying this would help to understand the role of Tool-Integrated Reasoning in the observed performance improvements.\\n\\n**A2:** During the evaluation of MathCoder2 models following continued pretraining, we use a text-only format, without utilizing code execution. We chose not to use code to demonstrate that our method improves the general mathematical reasoning abilities of LLMs and ensures a fair comparison with baseline methods.\", \"the_results_of_post_training_include_two_formats\": \"Chain-of-Thought (CoT) and Tool-Integrated-Reasoning (TIR). CoT testing does not involve code generation, while TIR executes the generated Python code and appends the output to the model's generation to provide feedback from the execution. This approach is similar to those described in [1] and [2]. Our continued pretraining enhances the models\\u2019 ability to be fine-tuned for TIR reasoning, as demonstrated in the ablation study presented in Table 7 of the paper.\\n\\n| Base Model | MATH | GSM8K | OCW | Olympiad Bench | SVAMP |\\n|------------------------------|------|-------|------|----------------|-------|\\n| Llama-3-8B | 56.1 | 80.1 | 24.6 | 28.4 | 83.8 |\\n| MathCoder2-Basic-Llama-3-8B | 62.9 | 81.3 | 26.8 | 32.9 | 86.7 |\\n| MathCoder2-Llama-3-8B | **65.1** | **84.5** | **34.6** | **34.4** | **87.9** |\\n\\n\\n[1] Wang, Ke, et al. \\\"Mathcoder: Seamless code integration in llms for enhanced mathematical reasoning.\\\" arXiv preprint arXiv:2310.03731 (2023).\\n\\n[2] Gou, Zhibin, et al. \\\"Tora: A tool-integrated reasoning agent for mathematical problem solving.\\\" arXiv preprint arXiv:2309.17452 (2023).\\n\\n**Q3:** Scope Limitation: The focus on grade-school-level mathematics is not explicitly emphasized, potentially misleading readers about the dataset\\u2019s broader applicability.\\n\\n**A3:** You are correct that our work primarily focuses on mathematical problems ranging from grade-school word problems (GSM8K) to high-school competition problems (MATH) and college-level problems (OCW). Other forms of mathematical reasoning, such as formal theorem proving, are beyond the scope of this study. We have acknowledged this as a limitation and a future direction in Section 5.\\n\\n**Q4:** Ablation Study Depth: While the ablation studies show the value of the synthesized code, further exploration into the necessity of aligning reasoning steps with code versus treating them as independent could provide deeper insights. Would separating reasoning steps and corresponding code into independent examples still yield significant improvements? Such an ablation could help assess the criticality of their alignment in the dataset.\\n\\n**A4:** Following your suggestion, we have added an ablation study of training DeepSeekCoder-1.3B on data with separated reasoning steps and corresponding code, as presented in the \\\"Basic + Separated Text&Code\\\" row in the table below. Separating the reasoning steps and corresponding code reduces performance compared to pairing them together, which demonstrates the effectiveness of our design. 
This ablation study has also been added to Table 4 in the paper.\\n\\n|Data Composition |Base Model |MATH |GSM8K |SAT |OCW |MMLU-MATH|\\n|---|---|---|---|---|---|---|\\n|Basic + Separated Text&Code |DeepSeekCoder-1.3B|17.0 |22.0 |46.9 |4.8 |25.3 | \\n|Basic + Reasoning-Step&Code |DeepSeekCoder-1.3B |17.8(+0.8) |25.5(+3.5) |59.4(+12.5) |5.9(+1.1) |26.1(+0.8)|\"}"
]
} |
1Iu2Yte5N6 | Rapid Selection and Ordering of In-Context Demonstrations via Prompt Embedding Clustering | [
"Kha Pham",
"Hung Le",
"Man Ngo",
"Truyen Tran"
] | While Large Language Models (LLMs) excel at in-context learning (ICL) using just a few demonstrations, their performance is sensitive to demonstration order. The reasons behind this sensitivity remain poorly understood. In this paper, we investigate the prompt embedding space to bridge the gap between the order sensitivity of ICL and the inner workings of decoder-only LLMs, uncovering the clustering property: prompts sharing the first and last demonstrations have closer embeddings, with first-demonstration clustering usually being stronger in practice. We explain this property through extensive theoretical analyses and empirical evidence. Our finding suggests that the positional encoding and the causal attention mask are key contributors to the clustering phenomenon. Leveraging this clustering insight, we introduce Cluster-based Search, a novel method that accelerates the selection and ordering of demonstrations in self-adaptive ICL settings. Our approach substantially decreases the time complexity from factorial to quadratic, saving 92% to nearly 100% of execution time while maintaining comparable performance to exhaustive search. | [
"in-context learning",
"order sensitivity",
"LLMs",
"clustering",
"cluster-based search",
"positional encoding",
"attention mask",
"serial-position effect",
"cluster-based search"
] | Accept (Poster) | https://openreview.net/pdf?id=1Iu2Yte5N6 | https://openreview.net/forum?id=1Iu2Yte5N6 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vdjEB14c6p",
"taUMpuQQRe",
"nqLsjP5EML",
"mEqYwEPMv8",
"kWFvqcFYDA",
"bIgDrDmc29",
"bFPLY3zbmU",
"T70gjWOCkB",
"STA5NtBc7B",
"KcRdU5HIyY",
"BpWbJDj81K",
"6BUMyzJosM",
"1kPxfNqeRx"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1737524284157,
1732727972082,
1732789439034,
1733104055939,
1734689677494,
1729838255925,
1730659312434,
1732728692256,
1733104096246,
1732798510294,
1732789411727,
1730356723662,
1733206054270
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13824/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13824/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13824/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13824/Area_Chair_Bk2G"
],
[
"ICLR.cc/2025/Conference/Submission13824/Reviewer_NLJn"
],
[
"ICLR.cc/2025/Conference/Submission13824/Reviewer_Q71R"
],
[
"ICLR.cc/2025/Conference/Submission13824/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13824/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13824/Reviewer_NLJn"
],
[
"ICLR.cc/2025/Conference/Submission13824/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13824/Reviewer_nbs8"
],
[
"ICLR.cc/2025/Conference/Submission13824/Reviewer_nbs8"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to Reviewer Q71R\", \"comment\": \"We appreciate the thorough feedback from the reviewers. Let us address each point:\\n\\n**1. Regarding the evidence of clustering in classification tasks:**\\nWe agree that more clarity would be helpful. The clustering property is actually well-demonstrated for classification tasks in multiple ways: (i) Fig. 1 (right) and Fig. 9 (right) show clear clustering patterns for symbolic sentiment classification; (ii) Our quantitative results in Table 2 show that cluster-based search significantly outperforms random selection on classification tasks (e.g., symbolic sentiment: 51.5% vs 51.3% for GPT-Neo-2.7B, 82.3% vs 71.7% for Qwen-2.5-14B), which would not be possible without the underlying clustering property; (iii) The partial derivative analysis in Fig. 3b shows consistent U-shaped patterns across all tasks, including classification.\\n\\n**2. On scaling experiments:**\\nWe have conducted extensive additional experiments with larger demonstration pools and varying numbers of demonstrations. Specifically, in Appendix D, we present results for k_total = 16 with k = 4 (Table 5) and k = 10 (Table 6). The cluster-based search maintains its effectiveness even at these larger scales - for instance, with k_total = 16 and k = 4, it achieves 92.5% average accuracy on Qwen-2.5-14B compared to 85.0% for random selection. Notably, exhaustive search becomes computationally infeasible at these scales, highlighting the practical importance of our method.\\n\\n**3. Regarding more thorough analysis:**\", \"we_have_enhanced_our_analysis_in_several_ways\": \"- Added standard deviations to all accuracy results in Table 2 to show the consistency of our method\\n- Included results with larger k=10 demonstrations (Table 6) to demonstrate effectiveness with more intermediate demonstrations\\n- The performance generally improves with more demonstrations (e.g., Table 8 shows consistent improvements when moving from 4 to 8 demonstrations across different models)\\n\\n**4. On selection methods comparison:**\\nWhile we focused primarily on entropy-based selection, our method's effectiveness is demonstrated through both ideal (oracle) selection (Tables 1, 4) and practical entropy-based selection (Tables 2, 5, 6). The consistent performance improvements across these different criteria support the robustness of our approach.\\n\\n**5. Regarding time performance:**\\nFigure 7 provides a clear visualization of the efficiency gains - our method achieves 92% to nearly 100% reduction in search time while maintaining comparable accuracy to exhaustive search. This dramatic improvement in computational efficiency, combined with minimal accuracy loss, demonstrates the practical value of our approach.\\n\\nThese results collectively demonstrate both the theoretical validity and practical utility of our clustering-based approach across different scales, tasks, and selection criteria.\"}",
"{\"comment\": \"**Additional Experiments**\\n\\nFollowing your suggestion for more comprehensive experiments with larger k values, we have conducted new experiments with k_total=16 and k=10, reported in Table 6 of Appendix D. These results importantly demonstrate:\\n\\n1. Cluster-based Search continues to outperform Random Selection even with a larger number of demonstrations. Specifically, the accuracy improvements over Random Selection range from 1.5% to 5.9% across different models and tasks.\\n\\n2. The comparison with Exhaustive Search becomes especially relevant here - with k=10 and k_total=16, Exhaustive Search is computationally infeasible due to factorial complexity, highlighting the practical value of our approach.\\n\\n3. The effectiveness of Cluster-based Search with larger k values (where more middle demonstrations are present) provides additional support for our method's robustness, while acknowledging that middle demonstrations still contribute to model performance.\\n\\nThese new results complement our existing experiments with k=4 and help establish that our method scales effectively to scenarios with more demonstrations. Thank you for helping us make our evaluation more comprehensive.\\n\\nThe effectiveness of our simplified approach, combined with our more nuanced theoretical understanding and comprehensive experimental validation, provides a stronger foundation for our work. Thank you for helping us improve the clarity, completeness, and rigor of our presentation. We hope that the reviewer can increase your score accordingly.\\n\\n(2/2)\"}",
"{\"comment\": \"Dear Reviewer nbs8,\\n\\nWith the discussion period ending in approximately 1.5 days, we would greatly appreciate your feedback on our rebuttal. We believe we have thoroughly addressed your concerns and would welcome your assessment. If you find that we have successfully resolved your major points, we respectfully request that you consider adjusting your score to reflect these improvements.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"metareview\": \"This paper investigates the prompt embedding space to address the order sensitivity of in-context learning (ICL) in decoder-only LLMs, revealing that prompts with shared first and last demonstrations exhibit closer embeddings, particularly with stronger clustering around the first demonstration. By analyzing the role of positional encoding and causal attention masks in this clustering phenomenon, the authors propose cluster-based search, a novel method that enhances the selection and ordering of demonstrations in self-adaptive ICL settings. The paper is well-written, and in the discussion phase, the authors addressed almost all of the reviewers' concerns.\", \"additional_comments_on_reviewer_discussion\": \"A reviewer QWk7 failed to submit the review. However, according to the other three reviewers' consistent and positive ratings, I am confident to accept this paper.\"}",
"{\"summary\": \"This paper studied the ordering effects of demonstrations in in-context learning (ICL) and claimed that the first and last demonstrations are the most crucial ones for effective demonstrations using both empirical and theoretical analyses. Based on this observation, this paper proposed a cluster-based search method to find out effective demonstration orders (considering only the first and last demonstrations instead of all demonstrations), which will not suffer from the efficiency issue in Exhaustive Search. The experiments showed that the proposed method achieve small drop in accuracy but significant improvement in efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed idea of cluster-based search is simple yet effective for ICL.\", \"The performance of the proposed method, especially efficiency improvement, is very promising.\"], \"weaknesses\": [\"Some claims are not well supported by the empirical analyses. The cluster structure of GPT-2 model in Figure 1 seems unclear, compared to the other two LLMs. Figure 3 (a) shows that the clusters also share the same second demonstrations with high percentage, and for the two bottom figures, the percentage of sharing the same second demonstrations is even higher than the percentage of sharing the same last demonstrations. These observations may be conflict with the main claim of this work. Also, the analyses about the last demonstration seem to be less convincing, e.g., lines 340-346.\", \"The theoretical analyses are counter intuitive. According to Prop. 4.1, the embedding of the transformer layers will eventially the same if two promopts share the same first input token. I cannot understand this claim in the proof also, in which the authors mentioned that \\\"if causal attention mask is applied, then x_1(t) = x'_1(t) for all t >= 0.\\\" I am not sure why this assumption holds. Intuitively, if this proposition holds, I may infer that only the first demonstration will affect the performance and the last demonstration will not matter too much, which is different from the authors' claim.\", \"More comprehensive experiments are required. In Table 1, the case of Random demostrations is not included. It would be useful to also compare with Random ordering as in Table 2. Also, they authors used k=4 in the experiments, it might be also important to evaluate larger k values, e.g., 10 or 20. The main claim of this paper is that the demonstrations in the middle are not very important to the performance of ICL, but using only a few demonstrations in the middle (as in the experiments) may not be as convincing as using many demonstrations in the middle.\"], \"questions\": \"Please refer to my concerns in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors investigate the few-shot setting and the order of given demonstrations/examples in the prompt. Analyzing the last hidden state of the last layer of decoder-only transformers, they study the clustering property, which prompts sharing the first and last demonstration. Experiments are conducted in two domains: classification and reasoning. Each is divided into two tasks, and the classification sub-tasks are further modified into symbolic ones.\\nThe explanation proposed is that this property depends highly on the causal attention mask and the positional encoding. The first demonstration clustering depends on the causal attention mask. However, the last demonstration clustering depends on a more complex interplay of the causal attention mask and the positional encoding.\\nFollowing their findings, the authors propose a selection and ordering method based on the uncovered clusters. Experiments are conducted using their methods with an already-used entropy-based search. They compare their methods with an oracle and unmodified entropy methods. Their findings show that the clustering-based method while suffering a slight drop in performance, their method is more than 90% faster.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Running large language models is costly, and few-shot in-context learning is a common approach to alleviate the cost. The proposed method is simple and greatly reduces search time, making a practical contribution.\", \"Even though the theoretical assumptions are strong, their partial derivative analysis is original and clearly advocates for the clustering property.\", \"The cluster-based search proposed by the authors is well explained.\"], \"weaknesses\": [\"Too little evidence of clustering is given on the classification tasks, and clustering is unclear on The 2D projection (Figure 1, Figure 4).\", \"Few experiments have been done varying the number of demonstrations and the pool size; it would be really beneficial to give some insight on the scaling possibility of the method.\", \"A more thorough analysis of the results would be appreciated to confirm the findings, for example: Do the prompts sharing a close representation share similar scores? (what is the standard deviation ?) How does the performance change with the number of intermediate demonstrations? ( Some insights are given, but more results would greatly improve the demonstration).\", \"Not enough selection methods are considered for comparison in terms of time and scores.\", \"A table showing time performance and or gap with other methods is needed.\"], \"questions\": [\"How does a variant of Figure 3b with demonstrations instead of chunks of text compare?\", \"Do the prompts that share a close representation get similar scores? (what is the standard deviation ?)\", \"How does the performance change with the number of intermediate demonstrations? ( Some insights are given, but more results would significantly improve the demonstration)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer nbs8\", \"comment\": \"We appreciate the thoughtful feedback from the reviewer. Below we address each concern in detail.\\n\\n**Model Selection**\\n\\nWe appreciate the reviewer's concern about model selection. We have expanded our evaluation to include more recent architectures, specifically Phi-2 and the newly released Qwen-2.5 models. As shown in Table 2, our key findings hold consistently across these newer models: Cluster-based Search continues to significantly outperform random selection while maintaining comparable performance to exhaustive search. For example, on Qwen-2.5-14B, Cluster-based Search achieves 89.6% average accuracy across tasks compared to 85.0% for random selection, while requiring only a fraction of the computational cost of exhaustive search. This demonstrates that our method's effectiveness generalizes well to contemporary architectures.\\n\\n**Dataset Coverage**\\n\\nWe thank the reviewer for this valuable suggestion about including mathematical tasks. We have expanded our evaluation to include arithmetic word problems from the AddSub dataset (Hosseini et al., 2014). As shown in Table 2, our findings generalize well to this mathematical domain - Cluster-based Search maintains its advantages over random selection while achieving comparable performance to exhaustive search. For example, on Phi-2, Cluster-based Search achieves 77.9% accuracy on mathematical tasks compared to 75.1% for random selection, and on Qwen-2.5-14B, it achieves 87.1% compared to 85.1% for random selection.\\n\\n**First vs Last Demonstration Analysis**\\n\\nWe greatly appreciate this insightful observation. You are correct that our initial presentation didn't fully address the asymmetric nature of first versus last demonstration clustering. Based on this feedback, we have made substantial revisions to better align our claims with the empirical evidence:\\n\\n1. We have refined our main claim to explicitly acknowledge that while both first and last demonstration clustering exist, first-demonstration clustering tends to be stronger in practice. This more nuanced characterization better reflects our empirical findings across different analyses.\\n\\n2. Following this insight, we conducted a thorough ablation study (Section 5.2.1, Table 3) comparing first-demonstration-only clustering versus combined first-and-last demonstration clustering. The results are particularly illuminating:\\n - First-only clustering achieves comparable or sometimes better accuracy (e.g., 89.6% vs 88.2% on Qwen-2.5-14B)\\n - This simpler approach reduces computational complexity from O(k_total(k_total-1)) to O(k_total)\\n\\n3. While last-demonstration clustering appears less pronounced, we provide multiple lines of evidence for its existence:\\n - Figure 3a shows elevated percentage frequencies for last demonstrations compared to middle positions\\n - Figure 3b demonstrates higher partial derivative norms for ending tokens versus middle tokens\\n - New analysis in Figure 5 reveals attention weights to the last token steadily increase across layers, peaking in final layers (please refer to Section 4.1 for more detail)\\n\\nBased on these findings, we have updated our Cluster-based Search to focus solely on first-demonstration selection, achieving both better computational efficiency and comparable performance. 
This revision provides a more accurate and practical approach while maintaining empirical rigor.\\n\\nWe hope that the reviewer is satisfied with our response and will increase their score accordingly.\"}",
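A minimal sketch of the first-demonstration-only Cluster-based Search described in this rebuttal is given below, assuming an entropy-style scoring criterion. `build_prompt` and `entropy_score` are hypothetical placeholders for the paper's prompt construction and selection criterion, and the assumption that lower entropy indicates a better prompt is the sketch's own; this is not the authors' implementation.

```python
# Hedged sketch: instead of scoring all orderings of k demonstrations, score
# only one candidate prompt per choice of first demonstration, relying on the
# reported first-demonstration clustering property.
import random

def cluster_based_search(demo_pool, k, build_prompt, entropy_score):
    best_first, best_score = None, float("inf")
    for first in demo_pool:  # O(k_total) candidate prompts
        rest = random.sample([d for d in demo_pool if d is not first], k - 1)
        score = entropy_score(build_prompt([first] + rest))
        if score < best_score:  # lower entropy assumed to mean a better prompt
            best_first, best_score = first, score
    rest = random.sample([d for d in demo_pool if d is not best_first], k - 1)
    return [best_first] + rest
```

Scoring one prompt per candidate first demonstration is what brings the cost down to O(k_total), in line with the complexity reduction the authors report.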
"{\"comment\": \"Dear Reviewer Q71R,\\n\\nWith the discussion period ending in approximately 1.5 days, we would greatly appreciate your feedback on our rebuttal. We believe we have thoroughly addressed your concerns and would welcome your assessment. If you find that we have successfully resolved your major points, we respectfully request that you consider adjusting your score to reflect these improvements.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to the author's rebuttal\", \"comment\": \"I appreciate the detailed reponses from the authors, in which the new results and analyses have addressed my major concerns. Therefore, I will increase my rating to 6 and vote for acceptance.\"}",
"{\"title\": \"Response to Reviewer NLJn\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed and constructive feedback. We have carefully addressed each of your concerns as follows:\\n\\n**Empirical Analysis and Main Claims**\\n\\nBased on your observations, we have made significant revisions to better align our claims with the empirical evidence:\\n\\n1. **Refined Main Claim**: While we maintain that both first and last demonstration clustering exist, we now explicitly emphasize that first-demonstration clustering tends to be stronger in practice. This refinement better reflects the empirical evidence across different visualizations and analyses.\\n\\n2. **Improved Methodological Alignment**: Following this refined understanding, we have updated our Cluster-based Search to focus only on first-demonstration selection, rather than both first and last as in the previous version. Our ablation studies (Table 3) show this simpler approach achieves comparable or sometimes better accuracy while significantly reducing computational complexity from O(k_total(k_total-1)) to O(k_total).\\n\\n3. **Evidence for Last-Demonstration Clustering**: While we acknowledge that last-demonstration clustering may appear less pronounced in some visualizations, multiple lines of evidence still support its existence:\\n - Figure 3a shows elevated percentage frequencies for last demonstrations compared to middle positions\\n - Figure 3b demonstrates higher partial derivative norms (token importance) for the last chunk versus middle chunks\\n - We've added new evidence in Figure 5 showing attention weights to the last token steadily increase across layers and peak in final layers (reaching 0.1-0.2), suggesting prompts sharing last tokens are likely to produce similar next-token predictions\\n\\nThis more nuanced characterization better captures the asymmetric nature of demonstration importance while maintaining empirical rigor. The effectiveness of our simplified first-demonstration-only Cluster-based Search provides additional validation for this refined understanding.\\n\\n**Theoretical Analysis**\\n\\nYour questions about Proposition 4.1 highlight important technical aspects that we will clarify:\\n\\nThe statement \\\"if causal attention mask is applied, then x_1(t) = x'_1(t) for all t \\u2265 0\\\" follows directly from the causal attention mask mechanism. Because of causality, the first token's embeddings cannot be influenced by subsequent tokens - they can only attend to themselves. Therefore, if two sequences share the same first token (x_1(0) = x'_1(0)), their first-token embeddings will remain identical through all layers, as each layer's computation for the first token depends solely on the previous layer's first-token embedding.\\n\\nYour intuition about the proposition's implications for model behavior is insightful. Indeed, if the theoretical conditions held perfectly in practice, it might suggest overwhelming dominance of the first demonstration. However, we've discovered a more nuanced reality that we now better explain in Section 4.1.\\n\\nSpecifically, we've added an analysis of attention weight patterns across layers using the Qwen-2.5-72B model that reveals three distinct phases:\\n\\n1. **Initial Phase (layers 1-40)**: First-token attention dominates (0.8-0.9), aligning with Proposition 4.1 and the attention sink phenomenon\\n2. **Transition Phase (layers 40-60)**: First-token attention sharply declines from 0.8-0.9 to 0.2-0.4 as attention begins redistributing\\n3. 
**Final Phase (layers 60-80)**: First-token attention oscillates between 0.4-0.7, while last-token attention steadily increases to peak at 0.1-0.2\", \"this_progression_helps_reconcile_our_theoretical_and_empirical_findings\": \"while the theoretical tendency toward first-token clustering manifests in early layers, practical requirements of causal language modeling lead to attention redistribution in later layers. When combined with positional encodings, this creates the dual clustering behavior we observe.\\n\\nOur refined understanding aligns with recent work showing that causal transformers can infer positional information even without explicit positional encoding, while adding positional encoding enhances this capability. This helps explain why we observe both types of clustering, though first-demonstration clustering tends to be stronger in practice. Please refer to Section 4.1 in the revision for more detail.\\n\\n(1/2)\"}",
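The layer-wise first-token versus last-token attention analysis summarized in this response can be approximated with standard Hugging Face tooling. The snippet below is a hedged sketch, not the authors' code: the small Qwen checkpoint is a placeholder for the Qwen-2.5-72B model they analyze, and averaging choices (over heads, and over queries for the first token) are assumptions about how such curves could be computed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B"  # placeholder; the rebuttal analyzes Qwen-2.5-72B
tok = AutoTokenizer.from_pretrained(name)
# eager attention so that attention weights are actually returned
model = AutoModelForCausalLM.from_pretrained(name, attn_implementation="eager")
model.eval()

inputs = tok("demo_1 ... demo_k ... test query", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

for layer, attn in enumerate(out.attentions):  # each: (batch, heads, q_len, k_len)
    attn = attn.mean(dim=1)[0]                 # average over heads -> (q_len, k_len)
    to_first = attn[:, 0].mean().item()        # mean attention mass on the first token
    to_last = attn[-1, -1].item()              # last query's attention on the last token
    print(f"layer {layer:02d}: first={to_first:.3f} last={to_last:.3f}")
```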
"{\"summary\": \"The paper explores the issue of demonstration order sensitivity in large language models (LLMs) during in-context learning (ICL) and uncovers a clustering phenomenon in the embedding space, where prompts with the same first and last demonstrations tend to cluster together. Through theoretical analysis and empirical evidence, the paper identifies that this clustering effect stems from the interaction of causal attention masks and positional encoding. Moreover, they propose a \\\"Cluster-based Search\\\" method that significantly reduces the computational complexity of selecting and ordering demonstrations while maintaining high model performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Clear Argumentation: The paper is well-structured, with clear explanations that make the objectives and contributions easy to follow.\\n2. Robust Proofs: The theoretical analysis is thorough, supporting the proposed mechanisms in in-context learning.\\n3. Comprehensive Experiments: The experiments are detailed and varied, effectively demonstrating the method\\u2019s efficacy across multiple tasks.\", \"weaknesses\": \"1. The models used in this study seem somewhat outdated. Models with the equivalent size should include newer architectures, such as LLaMA 3, Phi, or similar. Why were these not used?\\n2. The datasets and tasks included in the study are limited. For instance, why is there no mathematical task such as GSM8k included in the paper\\n3. While the authors highlight the importance of the first and last demonstrations in ICL, the figures in the paper suggest that the first demonstration may be particularly or even most significant. However, in the cluster-based method, the authors did not conduct an ablation study that uses only the first or only the last demonstration in clustering to analyze the contributions of the first and last demonstrations independently.\", \"questions\": \"My main concerns have been listed above. I look forward to the authors' response and am willing to reopen and adjust the score upward.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to the author's rebuttal\", \"comment\": \"I appreciate the detailed reponses from the authors, I will increase my rating to 6 and vote for acceptance.\"}"
]
} |
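The rebuttal in the record above argues (Proposition 4.1) that, under a causal attention mask, the first token's embedding depends only on itself, so prompts sharing the same first token keep identical first-token representations through every layer. The following is a minimal PyTorch sketch of that invariance; the toy width, sequence length, and single random head are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n = 16, 5                                          # toy embedding width and sequence length
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))    # one random attention head

def causal_attention(x):
    # x: (batch, n, d); a single causally-masked attention head
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

x1 = torch.randn(1, n, d)
x2 = torch.randn(1, n, d)
x2[:, 0] = x1[:, 0]                                   # the two prompts share only their first token

y1, y2 = causal_attention(x1), causal_attention(x2)
print(torch.allclose(y1[:, 0], y2[:, 0]))             # True: position 0 attends only to itself
print(torch.allclose(y1[:, 1], y2[:, 1]))             # False: later positions see differing context
```

Stacking such layers with position-wise MLPs preserves the equality at position 0, which is the layer-by-layer recursion the rebuttal's argument relies on.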
1Iq1qIsc2s | Revisiting Positional Information in Transformers in the era of Fused Attention | [
"Aditya Kane",
"Ali Hassani",
"Humphrey Shi"
] | Imparting positional information has been a crucial component in Transformers due to attention's invariance to permutation. Methods that bias attention weights, like Relative Positional Bias (RPB), have been the preferred choice in more recent transformer-based architectures for vision. In parallel, fused attention has become the standard implementation for attention, largely thanks to open source solutions such as Flash Attention and FMHA. However, it is not trivial to fuse explicit biasing or masking of attention weights into a fused attention kernel without affecting its performance. In this scenario, position embeddings present themselves as a viable replacement for attention weight biases. Position embeddings are applied to the tokens directly, decoupled from the attention mechanism, thereby sidestepping the problems that arise with attention weight biases in fused kernels. In this work, inspired by the booming LLM landscape, we analyze the applicability of Rotary Position Embeddings (RoPE) as a replacement for RPBs in vision models. Unlike RPB, which explicitly biases attention weights, RoPE biases the dot product inputs (query and key) directly and ahead of the attention operation. We empirically show the prowess of RoPE over RPBs in terms of accuracy and speed. We study multiple implementations of RoPE and show that it is sufficient to use only a fraction of hidden dimensions for RoPE to achieve competitive performance. We also develop a fast implementation for Axial RoPE. Together with the most performant fused attention implementations, and our fast RoPE implementation, we observe inference speedups compared to RPB with improved or similar accuracy. We foresee RoPE as a replacement for RPBs, paving the way for the widespread adoption of fused attention in transformer-based vision models. | [
"Efficient Vision Transformers",
"Position Embeddings",
"CUDA"
] | Reject | https://openreview.net/pdf?id=1Iq1qIsc2s | https://openreview.net/forum?id=1Iq1qIsc2s | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zsbRwnk4Vm",
"usiFI3eFyv",
"ulw2tlxEZO",
"q48TsPFnov",
"iL4PTc2lIZ",
"de3l7WihRO",
"bVPskiLQqL",
"Yc01wurNZx",
"YRoWtVLWkR",
"QHmapjObIL",
"JVNYrdSjOe",
"B3dhX6xmff",
"28Wxy1LX0D"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_review",
"meta_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1731792277069,
1731791824446,
1730685962283,
1730783476896,
1733915691280,
1730555988679,
1737524188324,
1732405958431,
1732488336788,
1731792476480,
1732406287127,
1732662328299,
1731792375436
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12373/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12373/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12373/Reviewer_Ue2F"
],
[
"ICLR.cc/2025/Conference/Submission12373/Reviewer_uvBo"
],
[
"ICLR.cc/2025/Conference/Submission12373/Area_Chair_gErD"
],
[
"ICLR.cc/2025/Conference/Submission12373/Reviewer_c8uQ"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12373/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12373/Reviewer_Ue2F"
],
[
"ICLR.cc/2025/Conference/Submission12373/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12373/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12373/Reviewer_uvBo"
],
[
"ICLR.cc/2025/Conference/Submission12373/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"We thank the reviewer for their comments and constructive criticism. We would like to clarify the reviewer\\u2019s concerns here:\\n\\n_***Regarding weakness (1)***_, we want to clarify Section 3.3.2 is a review of different rotary embedding approaches [1,2]. Our ablations have been limited to k_rope and shared angles in Tables 4 and 5, but **following your suggestion, we will include ablations of all design choices mentioned in Section 3.3.2**. Thank you for bringing this to our attention!\\n\\n_**Regarding weakness (2)**_, since we were limited to image classification due to compute constraints, we report ImageNet-1k accuracy and inference throughput for different positional biases. It is indeed commonly accepted that RoPE has equal or better accuracy than RPB, and thus we focus our efforts on showing why RoPE is more scalable than RPB.\\n\\n_**Regarding weakness (3)**_, we would like to point out that even though the gains seem marginal, they cannot be achieved trivially. The gains also support the hypothesis that RoPE consistently outperforms RPB, while being hardware-friendly at the same time.\\n\\n_**Regarding weakness (4)**_, we request the reviewer to look at the common response for clarification.\\n\\n_**Regarding question (1)**_, we hypothesize that RoPE outperforms RPB (in terms of accuracy) because it simply provides the model a spatial bias, without having the need to learn the same. In contrast to this, RPB requires the model to learn this spatial bias itself, which might not be optimal given the data-hungry nature of transformer-based vision models.\\n\\n_**Regarding question (2)**_ yes, we expect the gains to effectively scale to larger models and resolutions. As the model size and resolution increases, RPB will indeed become a larger bottleneck due to the nature of fused attention mechanisms. In addition, RoPE will continue to scale up training efficiently as well, since it is a simple elementwise operation in both the forward and backward pass. Attention biases become a much larger bottleneck in training, since their backward pass is a reduction operation, and typically bound by memory bandwidth. We can see this in comparative training throughputs of RoPE and RPB models. For example, a ViT Base model with RPB has a training throughput of 6071 images/sec, whereas a ViT Base with Axial RoPE has a training throughput of 6596 images/sec, thereby observing a training speedup of ~8.5%.\\n\\n**References:**\\n\\n[1] Crowson, Katherine, et al. \\\"Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers.\\\" Forty-first International Conference on Machine Learning. 2024.\\n\\n[2] Heo, Byeongho, et al. \\\"Rotary position embedding for vision transformer.\\\" arXiv preprint arXiv:2403.13298 (2024).\"}",
"{\"title\": \"Common response to all reviewers\", \"comment\": \"We thank all reviewers for their insightful comments and constructive criticism.\\n\\nWe would like to clarify the contributions and intended message of our work here. Our work has the following objectives:\\n\\n1. To motivate the community to prefer positional embeddings (RoPE or otherwise) over attention weight biases in both large and small transformer-based vision models. We also want to provide concrete insights about the usage of positional embeddings and the downsides of using attention weight biases (like Relative Positional Biases, RPB) with fused attention implementations. \\n2. Consequently, we intend to study two flavors of RoPE, compare them, and introduce an efficient CUDA-based implementation to expedite this shift. Previous works which use RoPE in transformer-based vision models overlook training speed and we intend to improve the case for RoPE in that direction. \\n3. We observe that applying RoPE to a fraction of feature dimensions per head is enough to impart positional information, and we provide empirical evidence of the same.\"}",
"{\"summary\": \"This paper explores the use of Rotary Position Embeddings (RoPE) in vision transformer models and provides an analysis of the differences between Relative Positional Biases (RPE) and RoPE through empirical experiments. Additionally, it proposes a fused CUDA-based implementation to improve inference speed. The paper also presents various design choices for RoPE, supported by experimental results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow. The figures demonstrate the ideas clearly.\"], \"weaknesses\": [\"**Unclear Contribution:** The novelty of the paper is uncertain, as RoPE has been previously applied to vision transformers. Notably, Heo et al. (2024) and Chu et al. (2024) have already explored RoPE in this context, with 2D RoPE in particular resembling Heo et al.'s work. Further discussion is needed to clarify the differences between the current approach and previous implementations. A comparison in the experimental section is also recommended. Additionally, the authors should consider reorganizing the contribution sections, as the first two contributions appear unconvincing.\", \"**Inconclusive Results:** The paper lacks a definitive conclusion regarding the performance of 2D RoPE versus Axial RoPE. For instance, Table 4 shows that 2D RoPE outperforms Axial RoPE, warranting further discussion.\", \"**Limited Generalization Testing:** The paper does not assess the generalization ability of Axial RoPE across downstream tasks (e.g., detection and segmentation). Additional experiments to showcase RoPE\\u2019s generalization potential are recommended.\"], \"questions\": [\"In comparing Table 4 and Table 5, the shared angles consistently outperform the non-shared angles. Why, then, did the authors choose to use non-shared angles in Axial RoPE?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes using RoPE embeddings, a popular and widely used method for LLMs for vision transformers motivated by imperial gains in accuracy and efficiency when applying to multiple models of various sizes. For this, they extend RoPE to fit image space and tackle the challenge of implementing it and studying multiple rotary positional embedding implementations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The expansion of RoPE to images is presented clearly and makes intuitive sense\", \"The paper does a great job of motivating and describing the CUDA implementation.\", \"It is deeply appreciated that they go deeper and explore multiple different Rotary Positional Embeddings and report the comparisons\"], \"weaknesses\": [\"While a lot of small improvements are introduced in the method (3.3.2) the support or estimation of impact is somewhat lacking in the results.\", \"While the impact of k is detailed and appreciated, the measurement of performance is limited to accuracy and makes it hard to understand the gains or sacrifices associated with the implementations.\", \"The paper claims \\\"noteworthy gains\\\" across models, however the gains in Table 2 seem relatively limited (0.1-0.2) in most cases.\", \"Limited novelty, while the expansion of RoPE makes sense, the novelty both in terms of method and results might be limited.\"], \"questions\": [\"Could you expand on the justification for RoPE's superior performance compared to RPB, beyond the intuitive explanations provided?\", \"Would the gains in efficiency scale to larger model size and resolution combinations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper explores the application of Rotary Positional Embeddings (RoPE) in vision transformers by extending it to 2D and Axial RoPE and developing efficient CUDA implementation. Ablation studies analyze the design choices of RoPE, including positional coordinates, angle sharing, and the fraction of embeddings used. However, the novelty of this work is somewhat limited and the Axial RoPE design appears to be incremental. The performance gains reported are relatively minor and might not justify the added complexity in some cases. While the CUDA implementation adds value, the contributions do not meet the standards of significant technical or conceptual innovation expected for acceptance at ICLR.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewers Ue2F and c8uQ raised concerns about the novelty of the paper, particularly when compared to prior works. In evaluating the submission, I did not factor in a comparison with the concurrent ECCV'24 work in the final decision.\", \"Several reviewers highlighted the absence of generalization experiments beyond ImageNet-1k classification. While Reviewer uvBo acknowledged the computational advantages of RoPE, they also expressed concerns about the relatively minor performance gains. The authors' additional ablation studies demonstrated slight variations in accuracy across different configurations but failed to fully address the concerns regarding RoPE's limited impact on overall performance.\", \"While the authors addressed some concerns, the rebuttals did not significantly strengthen the contributions. The novelty remains limited, and the lack of downstream experiments and insufficient comparisons weaken the paper's case for acceptance.\"]}",
"{\"summary\": \"The paper proposes Rotary Postional Embeddings as a replacement for relative positional bias in transformers. They shows that it is leads to better accuracy and faster implementation. The paper tries to tackle the issue of latency when RPB is used with modern attention implementations such as flash attention.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"Instead of applying RPB which slows down the attention computation in flash attention modules, the paper devices a way to add positional embeddings before attention is computed.\\nSince RoPE implementation is not dependent on which attention module is used, it can be integrated with any of the modern fast modern attention implementation unlike RPB.\\nDeveloped an efficient CUDA implementation for RoPE with an easy-to-use Python wrapper.\\nFound that applying RoPE to a fraction of embedding is enough.\", \"weaknesses\": \"The fact that RoPE will improve performance in ViT is not a novel idea and has already been shown in\\nP. Jeevan and A. Sethi, \\\"Resource-efficient Hybrid X-formers for Vision,\\\" 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2022, pp. 3555-3563, doi: 10.1109/WACV51458.2022.00361. This paper has not been cited.\\nThe scope of this paper is limited to cases when fused attention kernel is used. When RPB is introduced in this case, it hampers the fast attention compute. \\nMost of the paper is just a review of postional embeddings and biases. \\nTable 3 shows best performance when there is no bias introduced. The paper does not explain why this is so and also why even RoPE is needed then.\\nThe ablations and experiments needs to more elaborated.\", \"questions\": \"Explain in detail the actual contributions of this paper and what novel ideas where brought in?\\nIs Axial RoPE your contribution or has it been taken from another paper and you just did a lot of analysis on it?\\nWhy do we even need RoPE if no bias gives best results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"**Following your suggestion, we perform ablations of all design choices in Table 1** -- position co-ordinates and angle generator. We perform the ablations while keeping the settings on Axial and 2D RoPE for the other two design choices (k and sharing of angles across heads). We will add the same in the manuscript.\\n\\n_Thanks a lot for the suggestion!_\", \"results_from_additional_ablations\": \"| | | Position co-ordinates | |\\n|:---------------:|:--------------------:|:---------------------:|:----------------:|\\n| | | Absolute indices | Between -1 and 1 |\\n| **Angle generator** | Exponential decay | 81.0 | 80.7 |\\n| | Bounded log-sampling | 81.2 | 81.4 |\\n\\n_**Table 1:**_ Ablation on angle generator and position co-ordinates with non-shared angles and $k_{rope}=2$.\\n\\n| | | Position co-ordinates | |\\n|:---------------:|:--------------------:|:---------------------:|:----------------:|\\n| | | Absolute indices | Between -1 and 1 |\\n| **Angle generator** | Exponential decay | 81.4 | 81.0 |\\n| | Bounded log-sampling | 81.0 | 80.9 |\\n\\n_**Table 1:**_ Ablation on angle generator and position co-ordinates with shared angles and $k_{rope}=1$.\"}",
"{\"comment\": \"Thank you for the authors\\u2019 response. After thoroughly reviewing both the common response and the specific reviews, I still have some concerns regarding the experimental conclusions and whether the claimed contributions meet the standards of ICLR. The contributions highlighted in the common response appear incremental. Additionally, the inconclusive results comparing 2D RoPE and Axial RoPE remain insufficiently addressed, and the generalization ability of RoPE has not been evaluated. To make the paper stronger, the contributions need to be clearly clarified, and include more significant technical or conceptual novelty. Overall, I believe the current paper requires some revisions to be accepted.\"}",
"{\"comment\": \"We thank the reviewer for their insightful comments.\\n\\n_**Regarding novelty**_, we request the reviewer to take a look at the common response for clarification.\\n\\n_**Regarding applicability outside fused attention**_, we would like to clarify that RoPE and our implementations of RoPE can be used with both fused or unfused attention alike. In fact, we observe significant throughput gains in both cases, as explained in Table 3. \\n\\n_**Regarding Table 3,**_ Table 3 shows throughput of the respective models with RPB, Axial RoPE and without any bias. The \\u201cNo bias\\u201d case implies there is no additional computation done in the model to impart positional bias. Thus, it is expected that the models without any bias would be slightly faster, since they don\\u2019t perform a chunk of computation as compared to other models with biases.\\n\\n**We will cite the mentioned work in the Related Works section. Thanks for bringing the same to our attention!**\"}",
"{\"comment\": \"**Following your suggestion, we ran experiments with shared angles for all heads and k=8 for the `small` variant of all models.**\\n\\nWe observed that while NAT and DiNAT do not exhibit any noticeable changes, we observe that Swin-small degrades by 0.3%, while ViT-small improving by 0.1%. From this set of experiments, we conclude that the actual choice of hyperparameters would depend on the model at hand the attention mechanism used in the same. From the above results, we would only hypothesize that since NAT and DiNAT already have an implicit spatial bias built into the attention mechanism, they are not significantly affected by changes in the implementation of RoPE. However, since Swin and ViT do not have any implicit spatial bias, they are more sensitive to changes in RoPE's implementation. \\n\\n_Thanks a lot for your suggestion!_\", \"results_from_additional_ablations\": \"| | Original | Shared PE, $k_{rope}=8$ |\\n|-----------------|:----------:|:-------------------------:|\\n| **NAT-small** | 83.8 | 83.8 |\\n| **DiNAT-small** | 83.9 | 83.9 |\\n| **Swin-small** | 83.1 | 82.8 |\\n| **ViT-small** | 81.4 | 81.5 |\\n\\n_**Table 1:**_ Ablations for `small` variant of all models with shared PE across all heads and $k_{rope}=8$.\"}",
"{\"comment\": \"I thank the authors for their general and individual replies to my questions, especially for extending the ablation of the design choices. I still have concerns about the relative marginality of the improvements, however, I believe that the overall analysis makes for a good contribution. I maintain my rating.\"}",
"{\"comment\": \"We thank the reviewer for their comments and constructive criticism. We would like to clarify the reviewer\\u2019s concerns here:\\n\\n_**Regarding unclear contribution**_, we request the reviewer to look at the common response for clarification. \\n\\n_**Regarding inconclusive results, shared angles and limited generalization**_: \\n\\n**Following your suggestion, we are currently performing the experiments with k=8 and shared angles across all heads. We will share an update when we have some numbers regarding the same.** At the time of submission, we chose to stick to a reasonable default, since performing experiments with all possible settings is infeasible. We could not include experiments on detection and segmentation owing to limited academic compute.\"}"
]
} |
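The record above centers on two practical points from the rebuttals: rotating queries and keys per spatial axis (Axial RoPE) and applying the rotation to only a fraction of each head's dimensions ($k_{rope}$). Below is a small PyTorch sketch of that combination; the helper names, the 25% rotated fraction, and the 7x7 toy grid are illustrative assumptions, not the paper's fused CUDA implementation.

```python
import torch

def rope_rotate(x, pos, base=10000.0):
    # Rotate consecutive feature pairs of x by position-dependent angles (standard RoPE).
    d = x.shape[-1]                                    # must be even
    inv_freq = 1.0 / (base ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    ang = pos[:, None] * inv_freq[None, :]             # (tokens, d/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)

def partial_axial_rope(qk, rows, cols, frac=0.25):
    # Rotate only a fraction of the head dims: half with the row index, half with the
    # column index; the remaining dims pass through unchanged.
    d = qk.shape[-1]
    k = int(d * frac) // 4 * 4                         # rotated dims, kept divisible by 4
    row_part = rope_rotate(qk[..., : k // 2], rows)
    col_part = rope_rotate(qk[..., k // 2 : k], cols)
    return torch.cat((row_part, col_part, qk[..., k:]), dim=-1)

H, W, d_head = 7, 7, 64                                # toy 7x7 token grid, one head
rows = torch.arange(H).repeat_interleave(W).float()    # row index of each token
cols = torch.arange(W).repeat(H).float()               # column index of each token
q = torch.randn(H * W, d_head)
print(partial_axial_rope(q, rows, cols).shape)         # torch.Size([49, 64])
```

Because the rotation acts only on the query/key inputs, it composes with any fused attention kernel, and restricting it to a few dimension pairs per head mirrors the small-$k_{rope}$ settings ablated in the rebuttal tables above.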
1IeCqgULIM | Abstracting and Refining Provably Sufficient Explanations of Neural Network Predictions | [
"Shahaf Bassan",
"Yizhak Yisrael Elboher",
"Tobias Ladner",
"Matthias Althoff",
"Guy Katz"
] | Despite significant advancements in post-hoc explainability techniques for neural networks, many current methods rely on approximations and heuristics and do not provide formally provable guarantees over the explanations provided. Recent work has shown that it is possible to obtain explanations with formal guarantees by identifying subsets of input features that are sufficient to determine that predictions remain unchanged by incorporating neural network verification techniques. Despite the appeal of these explanations, their computation faces significant scalability challenges. In this work, we address this gap by proposing a novel abstraction-refinement technique for efficiently computing provably sufficient explanations of neural network predictions. Our method *abstracts* the original large neural network by constructing a substantially reduced network, where a sufficient explanation of the reduced network is also *provably sufficient* for the original network, hence significantly speeding up the verification process. If the explanation is insufficient on the reduced network, we iteratively *refine* the network size (by gradually increasing it) until convergence. Our experimental results demonstrate that our approach substantially enhances the efficiency of obtaining provably sufficient explanations for neural network predictions while additionally providing a fine-grained interpretation of the network's decisions across different abstraction levels. We thus regard this work as a substantial step forward in improving the feasibility of computing explanations with formal guarantees for neural networks. | [
"explainability",
"XAI",
"explainable AI"
] | Reject | https://openreview.net/pdf?id=1IeCqgULIM | https://openreview.net/forum?id=1IeCqgULIM | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zdJei2G1Gj",
"vIRXjKHkPs",
"tYOtMcXbpg",
"r328QLI3qd",
"qF07N2wBUe",
"oXzpPzkCma",
"ln85SEdgsQ",
"j030sbP7ND",
"huVXc2V0Qa",
"gpYa6delN4",
"fkWBpbRCXr",
"bosqVXWziY",
"USF7VMbsyW",
"T7EtUPWLEr",
"PB8rMRPPoR",
"OGvNTrzdeE",
"Ne9kRpgywX",
"Hust5m21rl",
"ENx7eWIXB1",
"BgAZbqgWvg",
"9bNiR1PtPk",
"9KgWFEDJGw",
"91k2IrZoED",
"1liYC8EEAm"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732490020099,
1731944292608,
1732654332877,
1732717575690,
1734964256529,
1732205946022,
1732539456376,
1730580264877,
1729241553474,
1731944549179,
1737523549215,
1731944527693,
1730523993009,
1732539502137,
1732539626176,
1731944403439,
1733149313369,
1732010824436,
1731944237824,
1732604169296,
1731944581640,
1731944371526,
1730712100680,
1732530863105
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_eaKy"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_ovvZ"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Area_Chair_oM6c"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_eaKy"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_shdo"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_ovvZ"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_shdo"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_t2CN"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_t2CN"
],
[
"ICLR.cc/2025/Conference/Submission3031/Reviewer_shdo"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Authors,\\n\\nThank you for your thoughtful response. Similar to my works, I find this research demonstrates significant potential in utilizing formal methods for explainable AI. And I found it inspiring. I thoroughly enjoyed reading your paper and appreciate the effort you have put into expanding the dataset beyond my earlier suggestions, as my biggest concerns regarding scalability to other datasets and models are somewhat addressed.\\n\\nI do not have any further questions and would like to maintain my score as it is if the paper revision is reflected accordingly.\"}",
"{\"comment\": \"**Further discussion is needed regarding the (non)-unique generated subsets and the feature orderings of our method**\\n\\n\\nWe thank the reviewer for highlighting the importance of feature ordering and the non-uniqueness of generated subsets, a common trait in methods for this task. We do not discuss this thoroughly since it is not the primary focus of our work and has been explored in prior studies [3,4,5]. However, we agree with the reviewer on its importance and will address it thoroughly in the final draft:\\n\\n\\nWe first highlight that while sufficient explanations are indeed not unique, their intrinsic characteristics enable them to capture aspects often missed by additive attribution methods, such as feature interactions and non-linear behaviors. For example, the authors of Anchors [6] demonstrate that this type of explanation frequently yields results that are more intuitive and preferred by humans.\\n\\n\\n\\n\\n\\n\\nThe widely adopted approach [3,4,5] (which we also use) for obtaining subsets that are both *concise* and where the uniqueness concern is less dominant involves sorting features in descending order of their attributions and progressively removing the least significant ones first, as they are less likely to impact the classification outcome. This has the advantage of converging towards *smaller* subsets as well as to subsets that substantially *overlap* with other sufficient explanations, and hence mitigate the possible discrepancy between a random ordering over features (the \\u201cuniqueness\\u201d concern).\\n\\n\\nIn this work, we follow the ordering proposed in [3] that prioritizes features based on their (descending) sensitivity values. Following the reviewer\\u2019s comments, we will include an experiment that demonstrates the results of our approach under various feature orderings, leading to convergence toward different minimal subsets and studying the overlap of these subsets. We thank the reviewer for raising these points.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n**Comparison to brute-force occlusion**\\n\\n\\nAs mentioned earlier, our approach provides significantly stronger formal guarantees than traditional occlusion-based methods that rely on auxiliary baselines. To illustrate, we performed a brute-force occlusion experiment on MNIST with a fixed baseline of $0$, using the same pixel occlusion order as our algorithm. The average subset sufficiency was 19%, similar to heuristic methods such as Anchors and SIS, with an explanation size of 14.92 \\u00b1 8.64. In contrast, our method inherently guarantees 100% sufficiency. We will include this experiment for all benchmarks in the final draft. Thank you for the suggestion.\\n\\n\\n\\n\\n\\n\\n\\n\\n**How did you verify the sufficiency of the heuristic-based approaches?**\\n\\n\\nSince the heuristic-based approaches examined generate a subset of input features as their final output, we can evaluate the sufficiency of these subsets using the same procedure applied to verify our generated subsets. This involves validating their capacity to maintain sufficiency within an $\\\\epsilon$ perturbation, which can be integrated with a neural network verifier. 
\\n\\n\\n[1] Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification (Wang et al., Neurips 2021)\\n\\n\\n[2] First three years of the international verification of neural networks competition (VNN-COMP) (brix et al., STTT 2023)\\n\\n\\n[3] Verix: Towards verified explainability of deep neural networks (Wu et al, Neurips 2023)\\n\\n\\n[4] What made you do this? Understanding black-box decisions with sufficient input subsets (Carter et al., AI\\u2019STATS 2019)\\n\\n\\n[5] Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation (Huang et al., KR 2024)\\n\\n\\n[6] Anchors: High-precision model-agnostic explanations (Ribeiro et al., AAAI 2018)\"}",
"{\"comment\": \"Thank you for responding to the questions. Most concerns are resolved properly. I am willing to consider rasing my score.\"}",
"{\"comment\": \"We thank the reviewer for increasing their score to an 8, as well as for the insightful feedback and suggestions.\\n\\nWe will certainly incorporate the recommended aspects into our final version, including a comprehensive comparison with the occlusion-based benchmark.\\n\\nOnce again, we thank the reviewer for their valuable input!\"}",
"{\"metareview\": \"This paper proposes an abstract refinement framework based on neural network verification to obtain provably sufficient explanations of neural network predictions with improved efficiency. The key idea is to leverage the neuron-merging-based abstraction to obtain sufficient explanation for abstract network and derive the abstract sufficient explanation, which will be sufficient for the original network. However, the main/core result, abstract sufficient explanation, in sec 3 is immediate if one using the abstract interpretation (e.g. diff AI), linear bounding framework (e.g. crown) results in neural network robustness verification, and there are many follow-up work along this line, which can handle more complicated and larger scale models than the models studied in this work (e.g. in Appendix C). However, the current work missing this important part of literature for both experiment comparison and discussion, which is essential for the topic to use formal verification tool for post-hoc explanations.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal period, there are some major discussions regarding the run-time, scalability of the proposed method. The authors provided additional computation results over mnist with different perturbation size to show the sufficiency-runtime trade-off. The authors also acknowledge the scalability is a major challenge in the formal verification field, nevertheless, their results is better than the prior work (Wu, NeurIPS 23).\"}",
"{\"comment\": \"Thank you for your response. We have included a complete proof for Cor. 1 in the appendix as requested. We also appreciate your suggestion to rigorously prove this in greater detail, as it enhances the clarity and presentation of this aspect of the paper.\\n\\n\\nBy \\\"recognizing,\\\" we refer to the following: First, recall that an abstract neural network, as defined in the paper, differs from a neural network $f := \\\\mathbb{R}^n \\\\to \\\\mathbb{R}^c$ and is defined as $f' := \\\\mathbb{R}^n \\\\to 2^{(\\\\mathbb{R}^c)}$, where $c$ represents the number of classes. Definition 3 in our paper formalizes the concept of a sufficient explanation for an abstract neural network $f'$ (an \\u201cabstract explanation\\u201d). We define this formally as a subset $\\\\mathcal{S}\\\\subseteq \\\\\\\\{1,\\\\ldots,n\\\\\\\\}$ for which it holds that:\\n\\n$$\\n\\\\begin{aligned}\\n\\\\forall j\\\\neq t \\\\in [c] , \\\\ \\\\forall \\\\mathbf{\\\\tilde{x}}\\\\in\\\\mathcal{B}_p^{\\\\epsilon_p}(\\\\mathbf{x}): \\\\quad [\\\\text{min}(f\\u2019(\\\\mathbf{x}\\\\_{\\\\mathcal{S}};\\\\mathbf{\\\\tilde{x}}\\\\_{\\\\bar{\\\\mathcal{S}}})\\\\_{(t)})\\\\geq \\\\text{max}(f\\u2019(\\\\mathbf{x}\\\\_{\\\\mathcal{S}};\\\\mathbf{\\\\tilde{x}}\\\\_{\\\\bar{\\\\mathcal{S}}})\\\\_{(j)})], \\\\newline\\n\\\\text{such} \\\\ \\\\ \\\\text{that} \\\\ \\\\ \\\\mathbf{B}\\\\_p^{\\\\epsilon\\\\_p}(\\\\mathbf{x}):=\\\\\\\\{\\\\mathbf{\\\\tilde{x}}\\\\in\\\\mathbb{R}^n \\\\ | \\\\ ||\\\\mathbf{x}-\\\\mathbf{\\\\tilde{x}}||\\\\_p\\\\leq \\\\epsilon\\\\_p\\\\\\\\}.\\n\\\\end{aligned}\\n$$\\n\\nThe idea of this construction is that since the image of $f\\u2019$ is defined over continuous sets (in contrast to the image of $f$ which is in $\\\\mathbb{R}^n$), we can impose a stronger constraint by ensuring that the minimum achievable value for the target class exceeds the maximum value of all other classes. \\n\\nBy introducing this new concept, we can then show that under certain conditions, any subset $\\\\mathcal{S}$ meeting the criteria for a provably sufficient explanation for the abstracted model $f\\u2019$ under this definition also qualifies as a sufficient explanation for the original model $f$. This conclusion holds, for example, when the output sets of $f\\u2019$ are constructed by merging neurons and propagating bounds over $f$ using the construction employed by Ladner et al. This neuron-merging construction fits this use case since it computes an outer approximation of sets over neurons in layer $L_i$ based on layer $L_{i-1}$. We can then prove that by propagating these approximations recursively through the neural network, the (newly defined) sufficiency condition for $f\\u2019$ is strictly stronger than the original sufficiency condition of $f$. While we acknowledge the reviewer's point that this proof is not very technically complicated, we highlight the subtlety in recognizing that the careful formulation of this construction for an abstract explanation in the context of the abstract model $f\\u2019$ allows us to establish this novel relationship between the sufficiency conditions of $f$ and $f\\u2019$, which allow certifying the sufficiency of $f\\u2019$ rather than $f$ (much more efficiently).\\n\\n\\n\\n\\nThat being said, we agree with the reviewer\\u2019s initial observation that the greater challenge in obtaining the provable explanations discussed in this work lies not only in ensuring sufficiency but also in ensuring both sufficiency and *minimality* guarantees. 
Addressing this requires incorporating our novel \\u201cexplanation refinement\\u201d mechanism that relaxes abstraction constraints at varying levels, thereby enabling provable minimality, as demonstrated in Algorithm 2. This process indeed forms the main bulk of this work and allows us to generate significantly concise sufficient subsets even at notably coarse abstraction levels.\\n\\n\\nWe thank the reviewer again for highlighting this point, and we hope our answer has made this point clearer.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their strong support of our paper and for acknowledging its significant contribution to advancing the possibility of obtaining explanations with formal guarantees for neural networks.\\n\\nWe would like to highlight that we have included additional details about the mentioned experiments in the revised manuscript, along with other results requested by the reviewers (see general comment for more information).\\n\\nOnce again, we deeply appreciate your valuable feedback!\"}",
"{\"summary\": \"The paper proposes an abstraction-refinement technique to create provably sufficient and minimal explanations for neural network predictions. The method works by initially generating a smaller, abstract network through a neuron-merging technique. Neurons with similar behavior are combined, controlled by a reduction rate, which specifies the extent of abstraction. This smaller network allows faster computation of explanations, but if the explanation is insufficient, the network is iteratively refined by gradually adding neurons back until a satisfactory explanation is found.\\n\\nThe evaluation on datasets like MNIST, CIFAR-10, and GTSRB shows that this approach is more efficient in both computation time and explanation size than traditional verification-based methods. However, the method\\u2019s reliance on neural network verification may limit scalability, and its testing on only standard benchmarks raises questions about real-world applicability. Nonetheless, the paper\\u2019s contribution lies in using formal verification to ensure that the explanations are very sound and reliable, which is critical for safety-sensitive domains.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Introduces a novel abstraction-refinement framework integrated with neural network verification for generating provable explanations, a unique combination in the field.\\n2. Empirical results are robust, with detailed comparisons to baseline methods.\\n3. The paper is well-written with precise definitions and a logical flow that enhances its readability and understanding.\\n4. Addresses a critical challenge in XAI, making it possible to deploy more reliable AI systems in regulated environments.\", \"weaknesses\": \"1. The method\\u2019s reliance on neural network verification queries limits its scalability to real-world applications. The verification tools required here may not support the level of scalability needed for larger, state-of-the-art networks.\\n2. The paper primarily tests on MNIST, CIFAR-10, and GTSRB, standard benchmarks that do not adequately test the scalability and generalizability of the method. This narrow evaluation undermines claims of efficiency and limits insights into practical, diverse applications (such as regression problem). Including a challenging regression problem, such as the aircraft taxiing task evaluated by Wu et al., would provide stronger evidence of the method's scalability and applicability in high-stakes, continuous-output domains.\\n3. The abstraction process risks oversimplifying the neural network to a degree that explanations may lose meaningful detail, leading to explanations that are formally sufficient but practically uninformative.\\n4.The paper\\u2019s current evaluation lacks a comprehensive set of baselines, particularly from perturbation-based and gradient-based explainability methods. Including comparisons with these widely used XAI techniques would better contextualize the capabilities of the proposed abstraction-refinement approach.\", \"questions\": \"1. Could the authors provide further insights or potential solutions on how to extend the applicability of their method to more complex, state-of-the-art neural network architectures?\\n2. In the experimental setup, was the computational overhead of the abstraction-refinement process compared to traditional methods quantified beyond explanation size and time? A breakdown of this could enhance the paper's impact.\\n3. 
How does the method perform across different domains, such as vision versus text? Are there domain-specific challenges that might affect the sufficiency of explanations?\\n4. Have the authors considered evaluating their method on a complex regression problem, such as the aircraft taxiing task used in Wu et al.'s work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes to adopt an abstraction-refinement approach similar to distillation techniques for scaling up the extraction of provably sufficient explanations.\\nThis is a relevant research direction, as traditional methods have scalability issues.\\nResults confirm that the extracted explanations are indeed sufficient and also minimal, and the runtime shows great improvements compared to baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Providing provably sufficient explanations is a relevant problem, and developing methods to compute them efficiently is certainly of great interest. The results confirm the effectiveness of the proposed methodology\", \"The overall quality of the writing and presentation is very good\", \"The authors provided the source code with a good description of how to use it\"], \"weaknesses\": [\"Corollary 1 is essential for the overall message of the paper, but the proof is not self-contained and seems more like a straightforward application of the result of [1]. The authors should make the proof more self-contained, also clarifying how it builds on top of [1].\", \"The title suggests that the main contribution is a method providing provably sufficient explanations for neural networks. To my understanding, however, providing a provably sufficient explanation of an abstract model as per [1] is *fairly easy*, given that a sufficient explanation for any abstract model will also be sufficient for the original one. Nonetheless, this does not guarantee the minimality of the explanation, requiring the iterative refinement method proposed in Algorithm 2. I wonder, therefore, whether the main contribution lies in providing provably sufficient explanations, or in making a provably sufficient explanation also provably minimal.\", \"[1] Fully Automatic Neural Network Reduction for Formal Verification. Tobias Ladner and Matthias Althoff\"], \"questions\": [\"How hard is it to extend these results to arbitrary modifications of the complement, therefore not limiting to an epsilon-ball perturbation?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**Comparison to other formally provable types of explanations, other than sufficient explanations**\\n\\n\\nWhile we compare our approach to heuristic and formally provable methods for computing sufficient explanations, directly comparing it to other provable explanation types - like minimal contrastive explanations or exact Shapley values - is inherently unfair due to differing objectives. A fair comparison would require methods identifying a sufficient subset of input features, which our experiments already address comprehensively.\\n\\n\\nHowever, in response to reviewer comments, including ovvZ and t2CN, we will add experiments to evaluate our approach across additional comparable configurations: brute-force occlusion, varied feature orderings (descending importance in additive attributions), and feature selection within an additive framework. Initial results comparing our approach to brute-force occlusion showed that the average subset sufficiency of that approach is 19%, similar to heuristic methods such as Anchors and SIS, with an explanation size of 14.92 \\u00b1 8.64. In contrast, our method inherently guarantees 100% sufficiency. Thank you for highlighting this.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n**Information loss in the different abstraction levels may affect the explanation\\u2019s precision**\\n\\n\\nOur method produces significantly smaller networks, but the abstraction ensures that explanations that are provably sufficient for the reduced network *remain provably sufficient* for the original model, hence preserving the explanation\\u2019s precision.\\n\\n\\nHowever, it is true that a minimal sufficient explanation from the abstract model, while provably sufficient for the original model, may not remain provably minimal for it. This underscores the need for refinement. In a way, the explanation's size can indicate potential information loss at that level of abstraction, with larger explanations suggesting greater loss. However, we agree that traditional metrics like accuracy drop and KL divergence are valuable for measuring information loss. Comparing these to changes in explanation size offers an interesting perspective, which we will explore in an experiment in the final version.\\n\\n\\n\\n\\n\\n\\n\\n\\n[1] Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification (Wang et al., Neurips 2021)\\n\\n[2] Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound (Ferrari et al., ICLR 2022)\\n\\n\\n[3] Differentiable abstract interpretation for provably robust neural networks (Mirman et al., ICML 2018)\\n\\n\\n\\n\\n[4] First three years of the international verification of neural networks competition (VNN-COMP) (brix et al., STTT 2023)\\n\\n\\n[5] Verix: Towards verified explainability of deep neural networks (Wu et al, Neurips 2023)\\n\\n\\n\\n[6] Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation (Huang et al., KR 2024)\\n\\n\\n[7] Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks (Bassan et al., TACAS 2023)\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We thank the reviewer for the insightful comments. Our responses are provided below.\\n\\n**Extension to other non-image model types such as language models, and tabular data**\\n\\n\\nOur method is data-type agnostic and can be applied to other model types, such as those handling text or tabular data. We chose to focus on vision tasks due to their suitability for visualizing results and the computational challenges they present, given the relatively large input space. A key consideration in language tasks is that an $\\\\epsilon$ perturbation may significantly alter the semantic meaning of the input, making it less meaningful compared to vision tasks. An alternative approach is to apply $\\\\epsilon$ perturbations within a latent space. \\n\\n\\nWe conducted an initial experiment using our method on the SafeNLP language task from the annual Neural Network Verification Competition [5]. This task, the only language-focused challenge in the competition, is based on the medical safety NLP dataset. Certification with respect to $\\\\epsilon$ was achieved over an embedded representation of the input space, allowing for meaning-preserving perturbations. The experiment produced provably minimal and sufficient explanations, achieving a computation time of 0.66 \\u00b1 0.22 seconds and an explanation size of 6.67 \\u00b1 5.06. We appreciate the reviewer's suggestion to explore this direction and will include a comprehensive experiment in the final version of our paper to demonstrate our method's applicability to this benchmark.\\n\\n\\nAnother possible extension of our method is to regression tasks and not just classification. The provable guarantee here would be to satisfy that fixing the subset $S$ determines that the prediction remains within some $\\\\delta$ range. Following a suggestion by reviewer eaKy, we demonstrated this on the real-world, safety-critical Taxi-Net benchmark employed by the work of Wu et al. [5]. The results demonstrate that our method produces explanations for this setting with an average computation time of 35.71 \\u00b1 3.71 seconds and an average explanation size of 699.30 \\u00b1 169.34, marking a significant improvement over the outcomes reported by Wu et al.\\n\\n\\n\\n\\nWe appreciate the reviewer for bringing up this point and will incorporate both of these experiments, demonstrating potential extensions of our method, into the final version.\\n\\n**Scaling the framework to larger architectures**\\n\\n\\nObtaining provable sufficient explanations indeed depends on neural network verification queries, which currently constrain the scalability of this approach for SOTA architectures. However, with rapid advancements in verification techniques [1 - 4, and as explained below], ongoing improvements in scalability will directly enhance the applicability of our method. Compared to other methods addressing the same task of providing provably sufficient explanations for neural network predictions [5 - 7], our abstraction-refinement approach is significantly more scalable. This is demonstrated not only by the substantial efficiency improvements highlighted in the experiments section but also by the generation of explanations over much larger models. 
For instance, our approach handles notably larger models compared to recent work providing provably sufficient explanations (Wu et al., NeurIPS 2023) [5], which follows a \\\"traditional\\\" methodology (i.e., it does not leverage our abstraction-refinement technique).\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n**Providing background on neural network verification**\\n\\n\\nNeural network verification is an active research area (e.g., [1\\u20134]), with recent advancements in scaling methods like branch-and-bound [1,2,4] and abstract interpretation [3,4]. Our neuron-merging abstraction method is verification-agnostic, enhancing scalability in a complementary way. In the final version, we will include a detailed overview of recent advancements to improve accessibility. Thank you for highlighting this.\"}",
"{\"summary\": \"This paper introduces an abstraction-refinement approach to efficiently generate provably sufficient explanations for neural network predictions. Traditional formal explainability methods are computationally intensive and struggle with scalability on larger models. This method simplifies the neural network by creating a smaller, abstract version, which speeds up the computation of explanations. If the explanation is insufficient, the network is gradually refined until a minimal, sufficient subset of features is identified.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a unique abstraction-refinement technique, effectively bridging the gap between interpretability and scalability in neural network explanations. This approach is innovative in the realm of provable explanations.\\n2. Unlike many heuristic-based explainability methods, this technique provides formal guarantees for explanation sufficiency, which is highly valuable in safety-critical applications requiring reliability in interpretability.\", \"weaknesses\": \"1. The paper primarily demonstrates results on relatively simple models and standard datasets. Testing on more complex architectures (e.g., deep CNNs or Transformer-based models) would strengthen the claims regarding scalability and broader applicability.\\n2. While the abstraction-refinement approach reduces computational load, it is still constrained by the scalability limits of neural network verification. The authors could address this limitation by discussing ongoing advancements in verification techniques and how they might enhance this method.\\n3. The comparison focuses primarily on heuristic-based methods (e.g., Anchors and SIS) but lacks depth regarding alternative formal explanation methods. Adding comparisons with other provable techniques would provide a more comprehensive evaluation.\\n4. The abstraction process may lead to information loss, which could affect the explanation's precision or fidelity. The paper could benefit from a more in-depth analysis of the trade-offs between explanation minimality and information retention across abstraction levels.\", \"questions\": \"1. How well does the abstraction-refinement approach scale to more complex architectures, such as Transformers or deeper CNNs, beyond the datasets tested? Can the authors provide insights or preliminary results on its performance with larger models?\\n2. The paper mentions that abstraction reduces the network size, potentially losing information. How does this information loss impact the quality or trustworthiness of the explanations? Could the authors quantify or analyze this trade-off?\\n3. The experiments focus on image datasets. How does the approach generalize to other types of data, such as time-series, tabular data, or text? Are any modifications to the method necessary for non-image data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank the reviewer for raising their score to an 8 and for encouraging us to improve the clarity of our paper further.\\n\\nWe fully agree with the reviewer\\u2019s suggestion that incorporating illustrations to clarify the process of obtaining explanations for abstract networks will enhance the clarity of some concepts discussed in our paper. We will include these visualizations in the final version.\\n\\nThank you once again for the valuable feedback!\"}",
"{\"comment\": \"Dear Reviewers,\\n\\nThank you once again for your insightful feedback and for recognizing the significance of our work. We have addressed specific additional experimental and theoretical results in individual discussion threads. For your convenience, we have also incorporated these updates into the revised manuscript, including:\\n\\n1. We conducted an additional experiment to evaluate our method on the safeNLP benchmark from the annual neural network verification competition [1] and the real-world regression Taxinet benchmark, previously studied by Wu et al. ([2], NeurIPS 2023). These experiments were carried out to demonstrate the applicability of our method to additional domains, following the suggestions from reviewers eaKy and ovvZ (attached in Appendix D).\\n\\n2. Further ablation studies on varying $\\\\epsilon$ perturbations to better illustrate sufficiency-scalability trade-offs, as suggested by Reviewer t2CN (attached in Appendix D).\\n\\n3. Additional ablations of our results based on varying feature orderings, as recommended by Reviewer t2CN (attached in Appendix D).\\n\\n4. A complete and more rigorous refinement of the proof for Corollary 1, as suggested by Reviewer dQgK (attached in Appendix A).\\n\\nAdditional detailed responses are provided in the respective threads. We sincerely appreciate your valuable feedback and are happy to discuss any further points.\\n\\n[1] First three years of the international verification of neural networks competition (VNN-COMP) (brix et al., STTT 2023)\\n\\n[2] Verix: Towards verified explainability of deep neural networks (Wu et al, Neurips 2023)\"}",
"{\"comment\": \"**Did you compare abstraction-refinement to traditional methods using metrics other than time and size?**\\n\\n\\nOur primary evaluation method builds on prior works addressing sufficiency-based explanations [1,2,3], focusing primarily on the most common metrics: computation time and explanation size. Additionally, we present a detailed analysis that evaluates our results across different levels of abstraction, highlighting how computation time and explanation size are influenced by the level of abstraction. \\n\\n\\nFurthermore, in response to reviewer feedback and suggestions, we will include in our final review: (1) an analysis of how varying $\\\\epsilon$ impacts the sufficiency of the produced subset, providing a more fine-grained understanding of both our method and heuristic approaches, along with their efficiency and explanation size, (2) an examination of how different feature orderings influence the resulting explanation, and (3) an evaluation of information loss across various abstraction levels.\\n\\n\\nWe ran a preliminary experiment over MNIST with small $\\\\epsilon$ perturbations to demonstrate their impact on computation time and explanation size:\\n\\n\\n\\n\\n| Perturbation Radius | Explanation Size | Computation Time |\\n|---------------------|------------------|-------------------|\\n| 0.012 | 219.450 \\u00b1 142.228 | 110.111 \\u00b1 33.712 |\\n| 0.011 | 186.970 \\u00b1 140.435 | 101.881 \\u00b1 41.625 |\\n| 0.010 | 153.240 \\u00b1 135.733 | 94.897 \\u00b1 46.244 |\\n| 0.009 | 119.040 \\u00b1 127.271 | 81.889 \\u00b1 52.578 |\\n| 0.008 | 87.530 \\u00b1 113.824 | 62.607 \\u00b1 58.084 |\\n| 0.007 | 59.420 \\u00b1 95.607 | 53.072 \\u00b1 56.709 |\\n\\n\\nWe thank the reviewer for bringing up this point and will incorporate the full experiment on additional benchmarks, along with the other mentioned evaluations, in the final version.\\n\\n\\n [1] Verix: Towards verified explainability of deep neural networks (Wu et al, Neurips 2023)\\n\\n[2] Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation (Huang et al., KR 2024)\\n\\n\\n[3] Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks (Bassan et al., TACAS 2023)\\n\\n[4] Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification (Wang et al., Neurips 2021)\\n\\n\\n[5] First three years of the international verification of neural networks competition (VNN-COMP) (brix et al., STTT 2023)\"}",
"{\"comment\": \"Dear Reviewer ovvZ,\\n\\n\\nThank you once again for your detailed and insightful feedback, which has been instrumental in identifying areas where our paper could be further clarified.\\n\\n\\nWe are glad to have addressed your concerns and appreciate that you are considering increasing your score. Should you have any additional questions, we would be more than happy to address them.\\n\\n\\nBest regards,\\n\\n\\nThe Authors\"}",
"{\"title\": \"Feedback on Author Response\", \"comment\": \"Thank you for your answer. I appreciated your new experiments on the increasing radius. I have one final doubt regarding your second answer. I would appreciate it if you could clarify the following point.\\n\\n> [...] The first challenge lies in recognizing that a neuron-merging abstraction technique, such as the one discussed by Ladner et al., can be used to abstract a neural network while ensuring that a provably sufficient explanation for the abstract model remains provably sufficient for the original model [...]\\n\\nWhat do you mean, practically, by \\\"recognizing\\\"? It seems more of a motivation behind the paper's idea rather than a key contribution to the paper. My concern is further supported by the fact that Corollary 1 appeared more like a straightforward application of the result of [1], without highlighting the contribution provided by the authors. Maybe, uploading your revised version of Corollary 1, where you mentioned that you clarified this, would help.\\n\\nThank you again for your answer.\"}",
"{\"comment\": \"We appreciate the reviewer\\u2019s valuable and thoughtful comments. Please find our responses below.\\n\\n\\n**The practicality of the approach and exploring potential trade-offs between sufficiency and runtime.**\\n\\n\\nCertifying the provable sufficiency of an explanation depends on the use of a neural network verifier, which currently poses scalability challenges. However, advancements in this area are rapid [1,2], and our approach significantly outperforms competing methods for the same task [e.g., 3 - Wu et al., Neurips 2023], including substantial improvements in runtime (as highlighted in the experiments section) and in handling significantly larger models. While methods such as SIS and Anchors are generally more scalable since they do not rely on verifiers, they cannot guarantee provable sufficiency. As shown in the experiments section, fewer than 20% of the subsets generated by these methods were sufficient, whereas our method consistently produces provably sufficient subsets by design.\\n\\n\\n\\n\\nHowever, we do agree with the reviewer that discussing certain trade-offs is worthwhile, and we will include a thorough discussion of this in our final draft. Here is a brief overview:\\n\\n\\n**Sufficiency-runtime tradeoff:** A potential sufficiency-scalability trade-off lies in the size of the $\\\\epsilon$ ball for explanation verification. Smaller $\\\\epsilon$ balls improve scalability but limit sufficiency verification to narrower domains. We performed an initial MNIST experiment with small $\\\\epsilon$ perturbations, which showed how this reduction impacts computation time.\\n\\n\\n\\n\\n| Perturbation Radius | Explanation Size | Verification Time |\\n|---------------------|------------------|-------------------|\\n| 0.012 | 219.450 \\u00b1 142.228 | 110.111 \\u00b1 33.712 |\\n| 0.011 | 186.970 \\u00b1 140.435 | 101.881 \\u00b1 41.625 |\\n| 0.010 | 153.240 \\u00b1 135.733 | 94.897 \\u00b1 46.244 |\\n| 0.009 | 119.040 \\u00b1 127.271 | 81.889 \\u00b1 52.578 |\\n| 0.008 | 87.530 \\u00b1 113.824 | 62.607 \\u00b1 58.084 |\\n| 0.007 | 59.420 \\u00b1 95.607 | 53.072 \\u00b1 56.709 |\\n\\n\\n\\n\\nIn the final draft, we will provide a thorough analysis of this tradeoff, determined by the $\\\\epsilon$ perturbation, across all our benchmarks.\\n\\n\\n\\n\\n**Minimality-runtime tradeoff:** Another possible trade-off concerns minimality. Our approach shows that provably sufficient explanations can be achieved at varying levels of abstraction. Even with low reduction rates (e.g., reducing 90% of non-linear activations), many input dimensions can be verified, yielding a concise subset. Users can halt the abstraction-refinement process early, generating a provably sufficient (though non-minimal) explanation with significantly higher efficiency - requiring only ~10% of the computation time compared to the full algorithm. We thank the reviewer for raising this and will highlight it in the final version.\\n\\n\\n**Methods for abstraction in additive attributions and occlusion-based approaches should be referenced**\\n\\n\\nWe acknowledge the reviewer's emphasis on the importance of addressing methods, such as the notable work by Ancona et al., which employ probabilistic or uncertainty-based abstractions of neural networks to constrain additive attribution fidelity. However, we note that while these works focus on *probabilistic* guarantees, our method provides a much stronger, strict formal guarantee over entire continuous domains. 
To our knowledge, this is the first abstraction algorithm over neural networks to offer explanations with such guarantees. Our method introduces additional novelty through the refinement component, which progressively relaxes the abstraction constraints, and hence also ensures provable minimality guarantees.\\n\\n\\n\\n\\nSimilarly, we acknowledge that occlusion-based methods, such as those by Zeiler et al., are also relevant for their focus on feature subsets, and we will discuss them as related work. However, we do note a fundamental difference between occlusion-based methods and the formal guarantees provided by our approach. Specifically, occlusion methods typically rely on a fixed auxiliary baseline for the occluded complement, whereas our method ensures the strict sufficiency of the derived subset across an entire continuous domain, offering significantly stronger formal guarantees.\"}",
"{\"comment\": \"Thank you for your extensive and thoughtful response.\\n\\nI highly appreciate the discussion with respect to the trade-off, and the added\\nthe experiment with the decreasing $\\\\epsilon$ perturbation radius. I am happy\\nto read that the approach allows to trade runtime for explanation quality.\\nI also acknowledge the fundamental difference to the works by Ancona et al. and\\nZeiler et al. differ in the formal guarantees presented in this manuscript,\\nwhich nonetheless I appreciate the authors to discuss. Further, I am looking forward\\nto see the full results for brute-force occlusion. Although the low sufficiency\\nwas to be expected, I appreciate the addition as a fairly intuitive baseline.\\n\\nBy also including the other reviews and respective replies, I feel confident in\\nraising my score to 8.\"}",
"{\"comment\": \"We thank the reviewer for the valuable comments. See our response below.\\n\\n**Corollary 1 is not entirely self-contained**\\n\\n\\nWe thank the reviewer for highlighting this point. Given the significance of Corollary 1, we agree that making the proof more self-contained, building on the work of Ladner et al., would enhance the paper's overall clarity and accessibility. We will incorporate this change in the final version.\\n\\n\\n**Paper title implies that the contribution focuses on subset sufficiency, but proving minimality appears more challenging**\\n\\n\\nThe core idea behind proving that our abstraction-refinement strategy achieves a provably sufficient and minimal explanation is divided into two key challenges. The first challenge lies in recognizing that a neuron-merging abstraction technique, such as the one discussed by Ladner et al., can be used to abstract a neural network while ensuring that a provably sufficient explanation for the abstract model remains provably sufficient for the original model. As the reviewer rightly pointed out, the second significant challenge is applying such an abstraction method to derive explanations that are not only provably sufficient but also provably minimal with respect to the original model. While a sufficient explanation of the abstract model remains sufficient for the original model, it does not necessarily ensure provable minimality. To address this, we introduce the concept of a \\u201cprovably refined network\\u201d component, enabling the gradual refinement of both the neural network and its explanation. This process is highlighted in Algorithm 2 of the paper. We agree with the reviewer that emphasizing the \\u201cminimality\\u201d aspect of our provable explanations in the paper title could enhance clarity and better reflect the contributions of the work. Thank you for bringing this to our attention.\\n\\n\\n**Extending the results to arbitrary modifications of the complement** \\n\\n\\nExtending the results to arbitrarily large modifications of the complement can be obtained by expanding the bounds over the input features during the certification process. Since our method is agnostic to these bounds, this extension is naturally feasible. Regarding scalability, on the one hand, each verification query becomes more computationally demanding as it involves certifying a much larger domain. On the other hand, this broader setting generates larger explanations, due to the increased likelihood of subsets failing to maintain sufficiency, and since fewer verification queries are required, the overall runtime may not increase significantly. However, it is worth mentioning that since the obtained subsets may be significantly large, the generated explanations may be less meaningful. To illustrate this, we conducted an initial experiment on MNIST, comparing perturbations within a small domain of 0.01 to those in the entire [0,1] domain. The results are as follows:\\n\\n\\n\\n\\n\\n\\n| Perturbation Radius | Explanation Size | Generation Time |\\n|---------------------|------------------|-------------------|\\n| 0.01| 153.240 \\u00b1 135.733 | 94.897 \\u00b1 46.244 |\\n| [0, 1] | 768.970 \\u00b1 5.788 | 155.780 \\u00b1 2.779 |\\n\\n\\nBuilding on this remark and the questions raised by reviewer t2CN, we will include an additional analysis in our paper. 
This analysis will evaluate our method's performance under varying $\\\\epsilon$ perturbations across all benchmarks, offering insights into the trade-offs between sufficiency and runtime. An initial experiment on MNIST yielded the following results:\\n\\n\\n| Perturbation Radius | Explanation Size | Computation Time |\\n|---------------------|------------------|-------------------|\\n| 0.012 | 219.450 \\u00b1 142.228 | 110.111 \\u00b1 33.712 |\\n| 0.011 | 186.970 \\u00b1 140.435 | 101.881 \\u00b1 41.625 |\\n| 0.010 | 153.240 \\u00b1 135.733 | 94.897 \\u00b1 46.244 |\\n| 0.009 | 119.040 \\u00b1 127.271 | 81.889 \\u00b1 52.578 |\\n| 0.008 | 87.530 \\u00b1 113.824 | 62.607 \\u00b1 58.084 |\\n| 0.007 | 59.420 \\u00b1 95.607 | 53.072 \\u00b1 56.709 |\\n\\n\\nIn the final version, we will include the complete experiment across all benchmarks. Thank you for highlighting this point.\"}",
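For intuition only, here is a schematic of the coarse-to-fine loop described in this thread. The interfaces (`verify_prediction_stable`, the ordered list of abstract networks) are hypothetical placeholders, not the authors' Algorithm 2; the one soundness property assumed is the one stated above, namely that a subset certified sufficient on an abstract network remains sufficient for the original network.

```python
def sufficient_subset_by_abstraction(x, feature_order, networks, verify_prediction_stable):
    """Schematic abstraction-refinement sketch (hypothetical interfaces).

    `networks` is ordered from coarsest abstraction to the original model.
    `verify_prediction_stable(net, x, free_features)` must return True only if the
    prediction provably does not change when `free_features` vary in the epsilon ball.
    Features proven free on a coarse network never need the expensive original query;
    stopping early still yields a provably sufficient (possibly non-minimal) subset."""
    free = set()
    explanation = set(feature_order)
    for f in feature_order:
        candidate = free | {f}
        for net in networks:                      # try cheap abstractions first
            if verify_prediction_stable(net, x, candidate):
                free.add(f)
                explanation.discard(f)
                break                             # certified on this level; move on
        # if no level certifies it, f stays in the explanation
    return explanation
```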
"{\"comment\": \"We appreciate the reviewer\\u2019s valuable comments. See our responses below.\\n\\n**Reliance on neural network verification and extension to additional benchmarks and domains, such as Taxinet**\\n\\n\\nWe agree and acknowledge that any approach seeking to deliver provably sufficient explanations is naturally limited by the scalability challenges of neural network verifiers [1,2,3]. Nonetheless, progress in this field is advancing rapidly [4,5], and as the reviewer noted, our abstraction-refinement methodology provides a substantial improvement in the capability of using such methods to obtain provable explanations for neural networks. Additionally, we note that the benchmarks used in our experiments are more than one order of magnitude larger than those utilized in the notable work by Wu et al. (NeurIPS 2023, [1]), highlighting the improvements of our method in this regard as well.\\n\\n\\nAdditionally, similar to Wu et al., our framework can be adapted to regression tasks by verifying whether fixing a subset $S$ keeps the prediction within a specified $\\\\delta$ range. Based on the reviewer's suggestions, we conducted an experiment using our approach on the TaxiNet benchmark. The results show that our method generates explanations for this setting with an average computation time of 35.71 \\u00b1 3.71 seconds and an average explanation size of 699.30 \\u00b1 169.34, representing a significant improvement compared to the results reported by Wu et al. We appreciate the reviewer\\u2019s suggestion to extend our results for this benchmark and will include a detailed analysis in the final version.\\n\\n\\n\\n\\nFollowing another suggestion from the reviewer, we evaluated our method on the SafeNLP language task from the annual Neural Network Verification Competition [5]. This task, the only language task in the contest, is trained on the medical safety NLP dataset. The $\\\\epsilon$ certification is achieved over an embedding of the input space, enabling meaning-preserving perturbations. Our experiments yielded the following results: an average computation time of 0.66 \\u00b1 0.22 seconds and an average explanation size of 6.67 \\u00b1 5.06. We appreciate the reviewers' suggestions and will incorporate an extended experiment on this benchmark in the final version to demonstrate the applicability of our method across additional domains.\\n\\n\\n\\n\\n\\n\\n**Abstraction risks oversimplifying models to a degree where explanations are uninformative**\\n\\n\\nWe agree that this might partially occur at very low reduction rates, where the explanations remain provably sufficient at each abstraction level for the original model but are not necessarily provably minimal. However, our evaluations demonstrate that even at slightly higher reduction rates - still considerably smaller than the original network (such as around 20%-30% of the non-linear activations) - the provided explanations closely match those generated for the original model. This highlights minimal information loss in this context even at moderately higher reduction rates. This characteristic is, in fact, the core insight enabling our method to greatly enhance explanation generation efficiency, as most features are validated within coarser abstractions.\\n\\n\\n**Comparison to additional XAI benchmarks**\\n\\n\\nTo ensure a fair comparison, our generated explanations should be evaluated against methods that also aim to identify subsets intended to be sufficient. 
This stands in contrast to most existing XAI techniques, such as classic additive feature attribution methods or many gradient-based methods, which do not focus on obtaining sufficient subsets and are therefore not directly comparable. Within this context, we believe that the comparisons in our work are thorough, as they include evaluations against the two most prominent heuristic approaches as well as traditional provable methods.\\n\\n\\n\\n\\n\\n\\nHowever, in response to the reviewer\\u2019s comment, as well as the suggestions from reviewers t2CN and ovvZ, we will incorporate additional experiments to evaluate our approach across additional configurations. These experiments will include brute-force occlusion, provable explanations with varying feature orderings (based on descending importance in different additive attributions), and feature selection within an additive feature attribution framework. Preliminary results comparing our approach to brute-force occlusion revealed that the average subset sufficiency of the latter is 19%, comparable to heuristic methods like Anchors and SIS, with an explanation size of 14.92 \\u00b1 8.64. In contrast, our method naturally ensures 100% sufficiency.\"}",
"{\"summary\": \"The work introduces an approach to provide \\\"provably minimally sufficient\\nexplanations\\\", based on \\\"abstraction refinement\\\". Provably minimally\\nsufficient explanations are defined as the minimal set of features required to\\nunder which the model produces the same prediction. Abstraction\\nrefinement is a method from model checking and verification, which reduces the\\ncomplexity of the model while keeping the prediction (winning class) constant,\\nsomewhat similar to model pruning. The work includes a formal proof motivating\\nits approach, and some empirical experiments analyzing the runtime and\\nidentified number of minimal features, as well as a comparison to two similar\\napproaches with respect to sufficiency and runtime.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The work is clearly written.\", \"The proposed approach is well-motivated.\", \"The work makes the limitations of its approach clear.\", \"The application of abstraction-refinement from the area of model verification\", \"to feature explanation in an occlusion-based context is original.\"], \"weaknesses\": \"- Significance: The practicality of the approach is severely limited by is long\\n runtime (up to 3 hours for a single attribution mask). This issue could be\\n alleviated by discussing trade-offs, especially considering the approaches in\\n Table 2. Comparing with these methods, SIS and Anchors, there is likely some\\n optimal trade-off between sufficiency and runtime that would be valuable to\\n analyze.\\n\\n- Contribution: The identified explanations may not necessarily be unique. A purely additive\\n model with some threshold could have any combination of inputs, as long as\\n the threshold is passed (i.e., the prediction does not change). Due to this,\\n the identified feature attributions might not necessarily present all\\n features relevant for the prediction, but rather only a subset thereof.\\n A discussion of the uniqueness, and the issue of the order of removing the\\n features, would be very valuable.\\n\\n- Novelty: There is a plethora of approaches (see, e.g., Ancona et al., 2019 for\\n approximations of Shapley values) that assign relevance to features (somewhat\\n different to choosing feature subsets) with this issue without constraining\\n the sufficiency (i.e., the fidelity) of the model directly. These mostly\\n avoid computing the Occlusion (see Zeiler 2014), which observes the\\n prediction under removal of individual features, due to its infeasible\\n runtime. The approach presented is very similar to occlusion-based\\n approaches, as the model is reduced in order to occlude parts of the input.\\n This is an important body of related work to discuss.\", \"references\": \"Ancona, M., Oztireli, C., & Gross, M. (2019, May). Explaining deep neural\\nnetworks with a polynomial time algorithm for shapley value approximation. In\\nInternational Conference on Machine Learning (pp. 272-281). PMLR.\\n\\nZeiler, M. D. (2014). Visualizing and Understanding Convolutional Networks. In European conference on computer vision/arXiv (Vol. 
1311).\", \"questions\": [\"Did you consider some trade-off of sufficiency versus runtime?\", \"How do you solve the issue of uniqueness of the relevant feature set?\", \"How does this work compare to \\\"brute-force\\\" occlusion?\", \"How did you verify the sufficiency for the heuristics-based approaches?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Final reviewer comment\", \"comment\": \"Thanks for the clarification.\\n\\nOverall, I'm happy with the discussion with the authors, as they addressed all my concerns and showed that the discussion was indeed useful in clarifying some points that were not clear before. I also checked other reviewers' comments, and I found myself in line with their opinions. I also double-checked the paper and as a final comment, I kindly ask the authors to clarify a bit more the paragraph Abstract Neural Networks in lines 210+, as I found it difficult to intuitively visualize the domain of the abstract network, which is defined over intervals instead of the real plane.\\n\\n\\nI lean toward increasing my score to 8 as a sign of appreciation. However, I encourage the AC to take my recommendation cum grano salis, as I pointed out that formal verification is not my area of expertise. \\n\\n\\nThat said, best of luck with your submission!\"}"
]
} |
1HQZ4QFWi8 | Aligning Large Language Models via Self-Steering Optimization | [
"Hao Xiang",
"Bowen Yu",
"Hongyu Lin",
"Keming Lu",
"Yaojie Lu",
"Xianpei Han",
"Le Sun",
"Jingren Zhou",
"Junyang Lin"
] | Automated alignment develops alignment systems with minimal human intervention.
The key to automated alignment lies in providing learnable and accurate preference signals for preference learning without human annotation.
In this paper, we introduce Self-Steering Optimization ($SSO$), an algorithm that autonomously generates high-quality preference signals based on predefined principles during iterative training, eliminating the need for manual annotation.
$SSO$ maintains the accuracy of signals by ensuring a consistent gap between chosen and rejected responses while keeping them both on-policy to suit the current policy model's learning capacity.
$SSO$ can benefit the online and offline training of the policy model, as well as enhance the training of reward models.
We validate the effectiveness of $SSO$ with two foundation models, Qwen2 and Llama3.1, indicating that it provides accurate, on-policy preference signals throughout iterative training.
Without any manual annotation or external models, $SSO$ leads to significant performance improvements across six subjective or objective benchmarks.
Besides, the preference data generated by $SSO$ significantly enhanced the performance of the reward model on Rewardbench.
Our work presents a scalable approach to preference optimization, paving the way for more efficient and effective automated alignment. | [
"LLM",
"Alignment",
"Automated alignment"
] | https://openreview.net/pdf?id=1HQZ4QFWi8 | https://openreview.net/forum?id=1HQZ4QFWi8 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zyV40HBtUI",
"g3PzUYrlTS",
"cAzRngUA2f",
"ZmtAmjNncz",
"2Cg1na3X07"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730704665795,
1730695716730,
1730700047058,
1734348386013,
1730497505323
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9735/Reviewer_RsyP"
],
[
"ICLR.cc/2025/Conference/Submission9735/Reviewer_EXrW"
],
[
"ICLR.cc/2025/Conference/Submission9735/Reviewer_nne3"
],
[
"ICLR.cc/2025/Conference/Submission9735/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9735/Reviewer_e9XL"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes an auxiliary additive \\u201cself-steering\\u201d loss for iterative preference optimization algorithms (e.g. iterative IPO, DPO) for LLM alignment. This self-steering term is inspired from the principle-based alignment literature, and is designed to maintain a distinguishable gap between positive and negative responses despite sampling them on-policy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper considers a very interesting idea of introducing principle-based methods into preference optimization algorithms such as DPO and IPO. Such methods, especially their iterative versions, have substantial drawbacks as identified by section 1 of this paper, and addressing them would go a long way in achieving scalable and efficient LLM alignment.\", \"weaknesses\": \"I found the paper to be weak in the following aspects:\\n1. **Experimental results.** Many of the improvements of the method seem very incremental, or within noise (Table 1). Seeing these results, I'm not convinced that this method offers noticeable improvements over existing baselines.\\n2. **Clarity.** The paper structure and writing were lacking in several areas (see below), and I found the method to be explained poorly despite its simplicity. In particular, the loss term could be explained and motivated much better in section 2.3.\", \"questions\": \"1. Related work (2.1) should be its own section preceding section 2.\\n2. Should not use theta for loss weight (since it\\u2019s commonly used to refer to policy parameters).\\n3. The problem setting is not clearly defined - should be defined in the beginning of section 2 or its own section.\\n4. Line 199/200 - what does this backdoor refer to? This needs to be more clearly explained.\\n5. No error bars are given in results. This is particularly because many of the results show little difference between SSO and the baselines. \\n6. GSM8K iter1 of SSO seems misbolded in Table 1 - it is lower than modified PBAA iteration 1.\\n7. I would argue all MATH and GSM8K (Table 1) results are within noise. AE2 is also marginal (15.0, vs 14.9 for PBAA iteration 2).\\n8. Understanding why PBAA AE2 drops significantly would be an interesting contribution.\\n9. A good ablation would be simply removing the self-steering term (and keeping the WPO-inspired term) to understand its impact.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces Self-Steering Optimization (SSO), an method used to align LLMs with minimal human intervention. SSO autonomously generates on-policy preference signals to guide the training of policy models without the need for manual annotation. This approach leverages predefined contrastive principles during iterative training to maintain a consistent quality gap between chosen and rejected responses. The paper validates SSO using two foundation models, Qwen2 and Llama3.1, showcasing significant performance gains across both subjective and objective benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper provides extensive benchmarking of the method and with additional experiments proving the robustness of the method.\", \"As the paper touched upon, the method can be extended to other loses that is not IPO-based which makes it more flexible.\", \"The method reduces reliance on costly human annotations, paving the way for more scalable training processes.\"], \"weaknesses\": [\"The current design of the self-steering and weight functions is simplistic as mentioned in the limitations.\", \"The writing is a unclear at times and things in the method section could afford some more clarity. Especially reasoning about how your method solves your fundamental problem. Right now it's offered as a solution without going into details how.\", \"It's unclear what the author means with \\\"Expectations\\\" at section 2.3.\", \"Overall, a plan on how you will improve the clarity of the introduction where you should clearly state the problem and then how your method mend this problem would go a long way.\"], \"questions\": [\"How would your method scale with smaller models?\", \"How does SSO handle scenarios where human-like feedback is ambiguous or lacks clear contrastive principles?\", \"Due to no responses before the deadline I am now lowering my score\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a novel method called Self-Steering Optimization (SSO) for automated large language models (LLMs) alignment. SSO autonomously generates high-quality preference signals based on predefined principles to aid in preference learning. The authors also propose a new optimization objective based on WPO and IPO. The method demonstrates effectiveness on Qwen2 and Llama3.1 models across multiple benchmarks compared to SFT models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The automated alignment process reduces dependence on human annotations, making model alignment more scalable and cost-effective.\\n2. The results of SSO demonstrates improvements on benchmarks for both subjective and objective tasks with effective length control.\\n3. SSO can be extended to other base loss functions, e.g., IPO and DPO.\", \"weaknesses\": \"1. The improvement of this paper is mainly based on two categories, the synthetic data and the new objective. However, in the experiments the authors do not separate them well to state their effectiveness.\\n2. The clarity of this paper is not enough. The authors should provide more background on previous methods like WPO. The notations are also unclear. For example, $p^+, p^-$ from $\\\\mathcal{G}$ defined in Equation (1) do not appear in following contents. Meanwhile, in Section 2.3, the authors introduce multiple QA pairs for their objective without well explaining their expectations. \\n3. The SFT baseline is based on basic data rather than the synthetic data. DPO/IPO with SSO data is also not compared.\", \"questions\": \"1. Can you show the improvement of SSO from the generative data and the proposed optimization objective separately?\\n2. Can you further explain why using $y^-$ in Equation (2) will cause a bookdoor problem? In Equation (3), why should $x^-$ prefer $y^O$ over $y^+$?\\n3. Why do you choose different base models in the experiments, e.g., the pretrained model, instruct model, and also SFT model (from Table 3)? Is the SFT model the baseline from previous experiments?\\n4. In Figure 4 (a), why can we see \\\"IPO caused a gradually decreased accuracy\\\" since both the optimization methods and the data are different?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper introduces Self-Steering optimization, a preference finetuning method that automatically genreates perferences using contrastive pairs. SSO uses a combination of losses on automatically generated data to finetune an LLM.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"this paper tackles an important problem -- namely improving efficiency in the generation of alignment data.\", \"the paper does evaluation across a large number of benchmarks and sceniors (though the methodology and reasoning behind them is questionable, see weaknesses.)\"], \"weaknesses\": \"**Writing**\\n\\nThe paper is a bit hard to approach without proper background, but htis is not provided by the paper. In several places notation is not proerly defined. See the \\\"questions\\\" section as well.\\n\\n* I understand that the authors build on top of contrastive principles, but given an overview of this seems like necesary background.\\n* More intuition on the individual loss terms is necessary.\\n* Several grammar mistakes which should be corrected.\\n\\nThere are several unclear sentences / phrases in the paper. At present, I do not believe the writing passes the bar for publication. At the end of reading the paper, it is a bit unclear why the authors chose the specific losses / formulations used. \\n\\n**Delta Versus Prior work** \\nIt's unclear to me what the delta versus prior work is. Granted, I am not extermely familiar with principle based alignment. However, the authors do not do a good job articulating the differences between SSO and other methods. The closest they come to doing so is at the end of Section 2.1 where it is stated that \\\"Additional inputs, such as principles, could lead to insufficient... we propose SSO to address these limitations\\\"\\n\\nWhat part of SSO is different than prior work? I assume prior work has the contrastive principle sampling? Is the difference then just the on-policy weighting function W? Why is this important? This also seems to be taken from WPO.\\n\\n\\n**Experiments**\\nThe experiments section is not clearly written enough for me to discern what conclusions should be made. After reading the work, I was left with several questions about the methodology and presentation of the experiments:\\n* The Modified PBAA baseline is never defined. \\n* it doesn't make sense to me that the authors use ultra-feedback for training, but evaluate on MMLU-Pro and math. How does alignment influence math performance? \\n* Several of the results do not compare to baselines, and only present results for SSO. This includes Table 3 and Table 4\", \"questions\": [\"Questions on teh writing in the draft:\", \"Several terms are not properly defined. What are principles $p^+$ ad $p^-$. Why are there only two of them?\", \"What is $y^0$ and where does it come from?\", \"How does $x^+$ relate to $p^+$.\", \"Several ambiguous terms. What does \\\"accurate signal\\\" mean?\", \"What does \\\"We also designed a W for learnable signals\\\" mean?\"], \"questions_on_the_method\": [\"Could the authors be precise about what the delta is versus prior work? I pose this question in more detail in the weaknesses section.\"], \"questions_on_experiemnts\": [\"The Modified PBAA baseline is never defined. What is it?\", \"Why do we evaluate alignment methods on benchmarks like MMLU-Pro and math? Looking at the appendix, the alignment principles often have nothing to do with these benchmarks, yet they are the core means of evaluation. 
How can we know how helpful SSO is for alignment if the reported benchmarks are not actually concerned with alignment.\", \"Why should we be able to compare PBAA-based methods and Ultrafeedback? It seems like these are just totally different datasets. Could the authors explain this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1HCN4pjTb4 | Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse | [
"Arthur Jacot",
"Peter Súkeník",
"Zihan Wang",
"Marco Mondelli"
] | Deep neural networks (DNNs) at convergence consistently represent the training data in the last layer via a geometric structure referred to as neural collapse. This empirical evidence has spurred a line of theoretical research aimed at proving the emergence of neural collapse, mostly focusing on the unconstrained features model. Here, the features of the penultimate layer are free variables, which makes the model data-agnostic and puts into question its ability to capture DNN training. Our work addresses the issue, moving away from unconstrained features and studying DNNs that end with at least two linear layers. We first prove generic guarantees on neural collapse that assume \emph{(i)} low training error and balancedness of
linear layers (for within-class variability collapse), and \emph{(ii)} bounded conditioning of the features before the linear part (for orthogonality of class-means, and their alignment with weight matrices). The balancedness refers to the fact that $W_{\ell+1}^\top W_{\ell+1}\approx W_\ell W_\ell ^\top$ for any pair of
consecutive weight matrices
of the linear part, and the bounded conditioning requires a well-behaved ratio between largest and smallest non-zero singular values of the features. We then show that such assumptions hold for gradient descent training with weight decay: \emph{(i)} for networks with a wide first layer, we prove low training error and balancedness, and \emph{(ii)} for solutions that are either nearly optimal or stable under large learning rates, we additionally prove the bounded conditioning. Taken together, our results are the first to show neural collapse in the end-to-end training of DNNs. | [
"neural collapse",
"gradient descent training",
"weight decay",
"balancedness"
] | Accept (Oral) | https://openreview.net/pdf?id=1HCN4pjTb4 | https://openreview.net/forum?id=1HCN4pjTb4 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zPXleeD4ia",
"tEksYrePPX",
"s2w3dpRn9v",
"rFmOYfHCms",
"gdRU8n93p7",
"gOxg9acnOp",
"eS32vck9mm",
"c8ZQd6zRzC",
"WL38Ntl1ng",
"Vkbr3TH0Qq",
"SBvBuRvwHz",
"PSITCWqx5H",
"MrPkc9RkIe",
"KDeozzAXaY",
"II6L36fU9i",
"HisBi7lYTo",
"HXO6sQqUzn",
"5CJWmi7Ofo",
"3ijlN1ppBV",
"2TQJqcfx5s"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1730703144768,
1731918793424,
1732628129238,
1731919018843,
1737523810041,
1731919102422,
1730059212003,
1731919146990,
1732929565542,
1731918962271,
1730495601984,
1731919191534,
1730587227819,
1732649084357,
1732906271298,
1731918682452,
1734686831975,
1730694867576,
1731918861663,
1731916699341
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_H9Md"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_H9Md"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_39ne"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_PRAd"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_ggM6"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_1hwn"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_39ne"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_ggM6"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Area_Chair_8LpU"
],
[
"ICLR.cc/2025/Conference/Submission7009/Reviewer_PRAd"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7009/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"Dear authors, thank you for submitting your work to ICLR 2025. The paper considers the phenomenon of 'neural collapse' and attempt to extend current rather special case results to brader class of networks, specifically, a deep neural (non-linear) networks with a wide first layer, funnel architecture and several, i.e. 2+, linear layers (head) before the output. After taking several assumptions, paper shows in series of Theorems (3.1, 4.4,5.2) and Propositions (4.5, 5.1,5.3) that GD training with weight decay (under further assumptions) leads to within class variability collapse (NC1). Results are supported by experiments on MNIST and CIFAR and MLP and ResNet + MLP head.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"S1: Novelty, pushing for (more) general results on timely and attractive topic of 'neural collapse' in DNN training\", \"s2\": \"Striving for a theoretically solid argumentation stating necessary assumptions (in Theorems and Propositions) and as Assumtions 4.1, 4.2, 4.3 ...\", \"s3\": \"Well done Introduction positioning paper within existing works (UFM, NTK and other specific results)\", \"weaknesses\": \"W1: Accessibility of the paper for a '15 mins' reader. The main results are formulated using (Abstract, l19) 'balancedness', (Abstract, l20) 'bounded conditioning' and other only later defined phrases and makes it hard to asses the attractivity/usefullness of paper unless reading it whole. It is recommended to rework (the Abstract at least, if no Conclusions are present) to make it clear and self-sustained.\", \"w2\": \"Quite a few assumptions are required in general. More over thay are added along the way, e.g., Assumption 4.2, $|\\\\sigma(x)| \\\\leq |x|$, etc., (which is ok, but also narrows the applicability of the final result). Some of those are very technical (especialy those in Theorems, such as in Theorem 3.1, (2),(3), (4)) but as well Assumption 4.3, Theorem 4.4. (10) and more) and with paper specific notation to learn, e.g., $\\\\lambda_{3 \\\\rightarrow L}$. It would help paper to have an thorough discussion on applicability/limitations of these assumptions. Perhaps at expense of shortening some 'proof sketches' refering them to SM?\", \"w3\": \"'Discussion and Conclusion' sections are not presented. This is most likely due to space constrained and will be added in case of acceptance (?). Yet, I find it impacts the paper quality negatively. Especially in a light of the previous (W2) comments, the discussion and conclusions could have brought a critical 'back to reality' summary. Idealy it would bring more intuition for results and their application. Yet, I find it very important to have such Discussion reviewed before publishing ...\", \"w4\": \"Some references are rather inaccurately interpreted/generalized too much perhaps. For instance the lines 399-400 \\\"...Thankfully, instead of simply diverging, for large \\u03b7 (but not too large) the parameters naturally end up at the \\u2018edge of stability\\u2019: the top eigenvalue of the Hessian is close to $2/\\\\eta$ , i.e., the threshold below which GD is stable...\\\" from (Cohen et all. 2021). Referenced work provides only experimental evidence for a phonomenon and only approximately, i.e., for certain settings operator norm of Hessian exceeds 'stability threshold' $2/\\\\eta$, etc. 
Than approximation used on l410, especially $O(\\\\epsilon_1)$ is only valid if $\\\\nabla^2_{\\\\theta} Z_L$ norm is bounded, which is ok for NTK regime, but not necessarily for large learning rate regime. Or is it?\", \"w5\": \"Following up on W4, Proposition 5.3, and other Theorem combine NTK with large learning rate regime, which sounds dangerous. Also requirement on wide first layer, suggest a NTK limit case is required. Could authors clarify a bit more on this relation?\\n\\nOverall, I find it to be a solid attractive paper with technically legit reasoning, taking few shortcuts (some noted above) and with missing discussion and conclusions. I suggest authors to work on alleviating weaknesses and discussing limitations to improve contributions of this interesting work significantly.\", \"questions\": \"See Weaknesses for the most concerning questions.\", \"additionaly\": \"\", \"q1\": \"Line 188: \\\"... aproaches 1 ...\\\" Shouldn't it be \\\"... aproaches 2\\\" based on (3)? Is it still sufficient for orthogonality claims (can the proof be adjusted to account for it)?\", \"q2\": \"Proof sketch of Theorem 4.4., lines \\\"306\\\". Why and how is are \\\"two phases\\\" guaranteed to happen during GD training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response (2)\", \"comment\": \"*W4: Some references are rather inaccurately interpreted/generalized too much perhaps. For instance the lines 399-400 \\\"...Thankfully, instead of simply diverging, for large $\\\\eta$ (but not too large) the parameters naturally end up at the \\u2018edge of stability\\u2019: the top eigenvalue of the Hessian is close to $2/\\\\eta$, i.e., the threshold below which GD is stable...\\\" from (Cohen et all. 2021). Referenced work provides only experimental evidence for a phenomenon and only approximately, i.e., for certain settings operator norm of Hessian exceeds 'stability threshold' , etc. Than approximation used on l410, especially is only valid if $\\\\nabla^2_\\\\theta Z_L$ norm is bounded, which is ok for NTK regime, but not necessarily for large learning rate regime. Or is it?*\\n\\n**Response:** We are aware that the evidence for the edge of stability phenomenon is almost only empirical at this stage, and we will make it clearer in the main (and mention some already existing theoretical evidence for toy models, e.g. https://openreview.net/forum?id=p7EagBsMAEO). Basically our goal was to offer two possible assumptions under which the conditioning can be controlled, thus guaranteeing NC2 and NC3 in addition to NC1: global convergence, or stability under large learning rates. There is no theoretical approach to prove either of these assumptions today, but there is empirical evidence for the stability under large learning rates in many different architectural settings. We are reasonably hopeful that the learning rate stability could be proven in the future, though it would probably require some new proof techniques (in contrast we are less confident about global convergence, though we kept it as it is a more common assumption, even though it is very strong).\\n\\nThe approximation of l410 can be proven rigorously under the assumption that the weights are bounded and the interpolation error $\\\\epsilon_1$ goes to zero.\\n\\nWe now go into more details at the end of Section 5.2 of the revision to explain our intuition better.\\n\\n\\n\\n*W5: Following up on W4, Proposition 5.3, and other Theorem combine NTK with large learning rate regime, which sounds dangerous. Also requirement on wide first layer, suggest a NTK limit case is required. Could authors clarify a bit more on this relation?*\\n\\n**Response:** That\\u2019s a very good observation. The reason this is not a problem is that we only need the learning rate to be \\u2018reasonably large\\u2019, i.e. of order $1/L_1$, which is small enough for the NTK regime to be stable (in the NTK regime, the learning rate has to be chosen as $1/||\\\\Theta||\\\\_{op}$ and the NTK $\\\\Theta$ is a sum over the layers, so it typically scales linearly with depth). In contrast, for the moment our proofs require an extremely small learning which is negatively exponential in the depth $L_1$ (see Equation 27 in Theorem B.2), and it appears that these constraints actually come from the second part of training (after the NTK regime), where we only have a control on the parameter norm, and the worst case upper bound on the Hessian over bounded parameters is exponential in depth (Lemma C.1).\\n\\nWe added a more detailed discussion of these aspects at the end of Section 5.2 of the revision.\"}",
"{\"title\": \"After rebuttal comments\", \"comment\": \"Thank authors for a detailed revision and addressing raised concerns sufficiently. Especially discussion and clarifications (NTK + large learning rate regime as per original review) added to the paper are appreciated. Overall, amendments are significant and I raise my score to 6.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for appreciating our work and for the detailed comments. We reply to each of the points raised in the review below. We have also uploaded a revised version of the manuscript highlighting in red the main changes.\\n\\n*W1: My main concern with the paper is the writing. The theoretical statements are very difficult to parse this is especially the case for eqs 2-5 in Theorem 3.1. Can the authors provide a more interpretable version of this theorem (hide constants, big-O etc.) and delay the details to the appendix?*\\n\\n**Response:** This is a very good point. Following the suggestion of the reviewer, we have re-written Theorem 3.1 in a more interpretable form hiding constants and using a big-O notation. The precise version of this result is now deferred to Appendix B (cf. Theorem B.1).\\n\\n*W2: I have the same concern about Theorem 4.4.*\\n\\n**Response:** Similarly, we have simplified the statement of Theorem 4.4, deferring the original version to Appendix B (cf. Theorem B.2).\\n\\n*W3 (Minor): The proof of Theorem 3.1 in the Appendix could also be more clear. For example mentioning Weyl's inequality in the first inequality of (21).*\\n\\n**Response:** Thank you for pointing this out, we have made the intermediate steps of the proof more clear in the revision.\\n\\n*W4 (Minor): In the proof of Theorem 5.2 at the Appendix (line 1264) shouldn\\u2019t it be $\\\\kappa(W_L)=\\\\kappa(W_{L:L_1+1})^{\\\\frac{1}{L_2}}$ not $\\\\kappa(W_L)=\\\\kappa(W_{L:L_1+1})^{\\\\frac{1}{L_1}}$*\\n\\n**Response:** Thanks for noticing this typo. We have corrected it in the revision.\\n\\n*W5 (Minor): Same concern about $L_1$ vs $L_2$ in the statement of Theorem 5.2.*\\n\\n**Response:** Thanks for noticing this typo. We have corrected it in the revision.\\n\\n*Q1: What exactly is the role of these additional linear layers? Are they only required for proving NC2 and NC3?*\\n\\n**Response:** We also need them for our proof of NC1, although there we only require two consecutive linear layers. In particular we need the balancedness property to ensure that the features $Z_{L-1}$ are approximately in the row space of $W_L$ so that the formula $Z_{L-1}=W_L^+Z_L$ approximately holds. \\n\\n\\n*Q2: Do these results suggest that neural networks with 1 non-layer and many linear layers can also exhibit neural collapse?*\\n\\n**Response:** Yes, our results imply that neural networks with 1 non-linear layer and many linear layers provably exhibit neural collapse.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}",
"{\"title\": \"Response (1)\", \"comment\": \"We thank the reviewer for the positive evaluation of our work and for the insightful comments. We address each of them below.\\n\\n*W: The authors don't provide a conclusion or discussion section, and it would have been useful to have comments from the authors about what they think are the weakest parts of their work and what work should be prioritized in the future. I think they could get some additional space by removing some of the details about optimization from Sec 4 since I don't think they're super important.*\\n\\n**Response:** This is a great point. Following the reviewer\\u2019s suggestion, in the revision we have added a final section titled \\u2018Discussion and Concluding Remarks\\u2019. There, we discuss our assumptions, we compare with earlier work by Nguyen & Mondelli (2020) (see also the response to Q4 below), we discuss how our analysis relates to the NTK regime (see also the response to Q5 below), and we conclude with a direction for future research. \\n\\nTo make space and for the sake of readability, we have simplified the statements of Theorems 3.1 and 4.4 using a big-O notation, as suggested by Reviewer 1hwn.\\n\\n\\n*Q1: Are there no assumptions on the activation function needed for Theorem 3.1? None are stated in Section 3, although assumptino 4.2 appears later. It seems odd that a potentially discontinuous and horribly behaved activation function would be permitted (I imagine Lipschitz is required?)*\\n\\n**Response:** Theorem 3.1 concerns the linear head containing at least two layers, after the non-linear part of the network. For this reason, no additional assumption is needed on the activation function. \\n\\n*Q2: Assumption 4.2 seems more like a smooth leaky relu, which has appeared in prior work [Chatterjee arXiv:2203.16462, Frei et al COLT 2022]*\\n\\n**Response:** This is a good point, thank you for pointing out these works. Our assumption is in fact the same as in [Frei et al, COLT 2022] and it is similar to that made in [Chatterjee arXiv:2203.16462]. We have edited the revision accordingly. \\n\\n*Q3: The authors talk about (NC1)-(NC3), what about (NC4) from the original Papyan paper?*\\n\\n**Response:** This is another good point. In the original Papyan paper (as well as in follow-ups), it has been shown that the NC4 property follows from the first three. Thus, if one proves the first three, the fourth one is automatically guaranteed. We did not discuss this explicitly, since many neural collapse papers already addressed this [4] and NC4 is not usually discussed in the recent NC papers [1, 2, 3]. \\n\\n[1] Tirer, Tom, Haoxiang Huang, and Jonathan Niles-Weed. \\\"Perturbation analysis of neural collapse.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] S\\u00faken\\u00edk, Peter, Marco Mondelli, and Christoph H. Lampert. \\\"Deep neural collapse is provably optimal for the deep unconstrained features model.\\\" Advances in Neural Information Processing Systems 36 (2023).\\n\\n[3] Jiang, Jiachen, et al. \\\"Generalized neural collapse for a large number of classes.\\\" arXiv preprint arXiv:2310.05351 (2023).\\n\\n[4] Wu, Robert, and Vardan Papyan. \\\"Linguistic Collapse: Neural Collapse in (Large) Language Models.\\\" arXiv preprint arXiv:2405.17767 (2024).\"}",
"{\"summary\": \"This paper studies neural collapse phenomenon in training wide neural networks. The first result in this work establishes some general conditions (interpolation, balancedness between layers, well-conditioned-ness of weight matrices) such that NC1, NC2, and NC3 can hold. The second result considers training a wide neural network with gradient descent such that the aforementioned conditions hold after training, which implies neural collapse can happen after training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work goes beyond the conventional unconstrained feature model (which many previous works have worked on) and proves that neural collapse can happen after training for full-connected neural networks satisfying some architectural conditions such as pyramidal topology and smooth activation. The conditions established in Theorem 3.1 of this work under which neural collapse can happen seem very reasonable. Indeed, later, the authors proved that those conditions can be satisfied by training a neural network via gradient descent which automatically implies that neural collapse can happen after training.\\n\\nI am not very familiar with the prior works along this line of research such as (Kothapalli & Tirer, 2024), (Hong & Ling, 2024) and many others mentioned in the introduction and related work. Based on my knowledge on neural collapse, I think this work made some interesting contributions towards understanding this phenomenon.\", \"weaknesses\": \"1. The analysis critically relies on the fact that the last two layers of the neural network are linear. I can definitely see this condition makes the problem a lot easier to analyze. I am wondering how hard it is to remove such restrictions.\\n2. It seems the analysis of Theorem 4.4 relies on the neural network in the NTK regime, as the pyramidal topology assumption has appeared in previous works such as (Nguyen & Mondelli, 2020). I don't regard this as a major weakness even if it turned out to be true that the networks are in the NTK regime given the contribution of this work, however, I do appreciate clarification on this.\", \"questions\": \"I am wondering whether the layer balanced-ness property after training can be proved as a direct consequence of [1] in the author's setting.\\n\\n[1] Du, Simon S., Wei Hu, and Jason D. Lee. \\\"Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced.\\\" Advances in neural information processing systems 31 (2018).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response (2)\", \"comment\": \"*Q4: Are there any important differences between the proof of Theorem 4.4 and the proofs/results in Nguyen and Mondelli? I'm not familiar with that work, but it seems like it's not super necessary to have details about optimization (e.g. the PL inequality) in the main section of the paper, it distracts from what I think are the more interesting and stronger results elsewhere in it (it would also give additional space to have a proper conclusion and outline what the authors think are future directions). I am assuming balancedness didn't appear in the prior analysis but does here. A few sentences describing high-level differences between proofs and ideas would be helpful.*\\n\\n**Response:** We now address this point in detail in the paragraph **Comparison with Nguyen & Mondelli (2020)** added in Section 7 of the revision. There, we elaborate on similarities and differences between the proof of Theorem 4.4 and the analysis in Nguyen & Mondelli (2020).\\n\\nThe strategy to handle the first phase of the dynamics in Theorem 4.4 is similar to that in Nguyen & Mondelli (2020): the weights of the network reach an arbitrarily small loss without leaving a suitable ball centered at initialization. However, the implementation of this strategy is significantly different and our approach relies on Proposition 4.5. More precisely, the improvement comes from upper bounding the gradient as in (26) in Appendix B, which uses the PL inequality. In contrast, Nguyen & Mondelli (2020) use the loose bound in (18) of their work. We also note that the analysis of the second phase of the dynamics is entirely new. In fact, the purpose of this second phase is to achieve balancedness, which was not needed by Nguyen & Mondelli (2020) as mentioned by the reviewer. We elaborate on the novelty of our proof strategy also in the subsequent paragraph of Section 7 **NTK regime and beyond**, see the response to Q5 below. \\n\\nGiven that we do provide an improvement upon the earlier analysis by Nguyen & Mondelli (2020) via the PL inequality of Proposition 4.5, we have opted to keep it in the revision. To make space, we have instead simplified the statements of Theorem 3.1 and 4.4. By doing so, we have also been able to discuss a future direction, as suggested by the reviewer, see the concluding paragraph of Section 7 of the revision.\\n\\n*Q5: Also, to be clear, the optimization analysis in Sec 4 is in the NTK regime right? I didn't see this explicitly mentioned but it should be if it is known.*\\n\\n**Response:** Well, yes and no. In the first phase, we are essentially in the NTK regime and use NTK-type techniques to guarantee convergence (technically the pyramidal network setup is not exactly the typical NTK regime, see the last paragraph of Section 1 in (Nguyen & Mondelli, 2020); however, this distinction does not play a major role in our analysis). Nevertheless, there is a second phase, where the weight decay starts to take effect which is not NTK-like. Though our control of this second dynamics is weaker, we can still guarantee that the network will become more balanced and remain interpolating. To be more precise, we have a separation of timescales as $\\\\lambda$ gets smaller: the number of steps needed to reach balancedness is of order $\\\\frac{1}{\\\\eta \\\\lambda}$ (weight-decay timescale). This is significantly later than the interpolation time which is of order $\\\\frac{1}{\\\\eta}$ (NTK timescale). 
Note that at the end of training we end up with low-rank weight matrices and feature learning (hidden features that are different from their initializations), which could not happen in a purely NTK regime, so the second phase plays an important role.\\n\\nThis strategy/dynamics of \\u201cNTK followed by weight decay\\u201d is to our knowledge novel and it represents an important theoretical contribution of our paper. We have added a paragraph (**NTK regime and beyond**) in Section 7 of the revision about this aspect.\"}",
"{\"comment\": \"Thank you to the author for their thoughtful response and the insightful reference. I think it is a strong paper that brings a nice perspective on the theory of neural collapse.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for appreciating our work. We answer the question below.\\n\\n*Q: In Figure 3 the authors' numerical results show that non-linear layers are increasingly linear, as the depth of the non-linear part increases. Could the authors provide more insights into how this observation relates to their theoretical results and the mechanisms driving this increased linearity?*\\n\\n**Response:** This experimental observation serves as an empirical justification for the usage of linear layers at the end of the network. The more linear the non-linear layers are, the better justified is our usage of linear layers not only as a theoretical construction, but also as a phenomenon that is reasonable in practice. \\n\\nAs for the mechanisms driving this, our intuition is that once the network extracts all the relevant features of the training data, it is best for it to just carry these features all the way to the end of the network. This seems to minimize the total $\\\\ell_2$-norm of the weight matrices. For more intuition on this topic please also see [1], where the author analyzes this emergence of linearity in detail. \\n\\n[1] Jacot, Arthur. \\\"Bottleneck structure in learned features: Low-dimension vs regularity tradeoff.\\\" Advances in Neural Information Processing Systems 36 (2023): 23607-23629.\"}",
"{\"summary\": \"The authors identify a set of conditions under which various neural collapse phenomena provably occur in deep neural nets. They consider neural nets which have a sequence of nonlinear layers and then a sequence of linear layers. They find that approximate interpolation, weight balanced-ness, and boundedness suffice for deriving various neural collapse phenomena. They then show that GD on networks with a \\\"pyramidal\\\" overparameterized topology (i.e., first width is >= number samples, remaining widths are decreasing), under suitable initialization and regularization, allow for one of the neural collapse phenomena to hold. They then identify conditions (near-interpolation and small-norm) which ensure that global minimizers of the loss can satisfy all of the neural collapse phenomena. Finally, they look at the neural collapse phenomena from the perspective of the edge of stability via an analysis of the properties of the hessian under EoS assumptions.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The authors provide a novel characterization of conditions under which neural collapse can provably occur. I am not an expert on the NC theory literature, but to my knowledge, no prior work identified balancedness and interpolation as being key factors which can allow for this to occur, and this is a pretty strong finding. Since the balancedness of post-fixed linear layers is the strongest condition (as most common neural nets do not have multiple linear layers at the end, only a single one), the findings in Section 5 about how boundedness and interpolation can suffice for NC2+NC3 are also a nice addition. The numerical findings nicely complement their theoretical ones.\", \"weaknesses\": \"There aren't any serious weaknesses to me. The strongest assumptions--namely the need for multiple linear layers in order for the NC phenomenon to occur via the main theorem--seem necessary per experiments in Figure 1. So it seems that these assumptions are strong for a fundamental reason.\\n\\nThe pyramidal structure assumption is a bit odd/strong, but only seems needed for the optimization result, which I don't think is the central point of the paper. \\n\\nThe authors don't provide a conclusion or discussion section, and it would have been useful to have comments from the authors about what they think are the weakest parts of their work and what work should be prioritized in the future. I think they could get some additional space by removing some of the details about optimization from Sec 4 since I don't think they're super important.\", \"questions\": \"1. Are there no assumptions on the activation function $\\\\sigma$ needed for Theorem 3.1? None are stated in Section 3, although assumptino 4.2 appears later. It seems odd that a potentially discontinuous and horribly behaved activation function would be permitted (I imagine Lipschitz is required?)\\n\\n2. Assumption 4.2 seems more like a smooth leaky relu, which has appeared in prior work [Chatterjee arXiv:2203.16462, Frei et al COLT 2022]\\n\\n3. The authors talk about (NC1)-(NC3), what about (NC4) from the original Papyan paper?\\n\\n4. Are there any important differences between the proof of Theorem 4.4 and the proofs/results in Nguyen and Mondelli? I'm not familiar with that work, but it seems like it's not super necessary to have details about optimization (e.g. 
the PL inequality) in the main section of the paper, it distracts from what I think are the more interesting and stronger results elsewhere in it (it would also give additional space to have a proper conclusion and outline what the authors think are future directions). I am assuming balancedness didn't appear in the prior analysis but does here. A few sentences describing high-level differences between proofs and ideas would be helpful. \\n\\n5. Also, to be clear, the optimization analysis in Sec 4 is in the NTK regime, right? I didn't see this explicitly mentioned but it should be if it is known.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for recognizing the strengths of our work and for the detailed comments. We address each of them below.\\n\\n*W1: The analysis critically relies on the fact that the last two layers of the neural network are linear. I can definitely see this condition makes the problem a lot easier to analyze. I am wondering how hard it is to remove such restrictions.*\\n\\n**Response:** This is a great point. One important assumption in our work is that there is a linear head in the network, containing at least two layers. However, experimental evidence suggests that two linear layers are not necessary for collapse to happen. Removing this restriction is likely to be difficult and it provides an exciting future direction, which would consist of using the approach developed here, i.e., the NTK analysis and the separation of timescales, to prove collapse without two linear layers.\\n\\n*W2: It seems the analysis of Theorem 4.4 relies on the neural network in the NTK regime, as the pyramidal topology assumption has appeared in previous works such as (Nguyen & Mondelli, 2020). I don't regard this as a major weakness even if it turned out to be true that the networks are in the NTK regime given the contribution of this work, however, I do appreciate clarification on this.*\\n\\n**Response:** Well, yes and no. In the first phase, we are essentially in the NTK regime and use NTK-type techniques to guarantee convergence (technically the pyramidal network setup is not exactly the typical NTK regime, see the last paragraph of Section 1 in (Nguyen & Mondelli, 2020); however, this distinction does not play a major role in our analysis). Nevertheless, there is a second phase, where the weight decay starts to take effect which is not NTK-like. Though our control of this second dynamics is weaker, we can still guarantee that the network will become more balanced and remain interpolating. To be more precise, we have a separation of timescales as $\\\\lambda$ gets smaller: the number of steps needed to reach balancedness is of order $\\\\frac{1}{\\\\eta \\\\lambda}$ (weight-decay timescale). This is significantly later than the interpolation time which is of order $\\\\frac{1}{\\\\eta}$ (NTK timescale). Note that at the end of training we end up with low-rank weight matrices and feature learning (hidden features that are different from their initializations), which could not happen in a purely NTK regime, so the second phase plays an important role.\\n\\nThis strategy/dynamics of \\u201cNTK followed by weight decay\\u201d is to our knowledge novel and it represents an important theoretical contribution of our paper. We have added a paragraph (**NTK regime and beyond**) in Section 7 of the revision about this aspect.\\n\\n\\n*Q: I am wondering whether the layer balanced-ness property after training can be proved as a direct consequence of [1] in the author's setting.\\n[1] Du, Simon S., Wei Hu, and Jason D. Lee. \\\"Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced.\\\" Advances in neural information processing systems 31 (2018).*\\n\\n**Response:** This result is indeed very closely related, though what our analysis requires does not seem to be a direct consequence, mainly because we consider gradient descent instead of gradient flow, and we use weight decay/L2 regularization. Nevertheless, we have added a citation of this very relevant work.\"}",
"{\"summary\": \"This paper provides theoretical evidence for end-to-end training of true deep neural networks. This is contrary to previous works which primarily rely on the unconstrained features model. The paper provides explicit bounds on NC1, NC2 and NC3 for both globally optimal ($\\\\ell_2$) regularized deep neural networks as well as neural networks trained with gradient descent. The results provide new insights on the role of additional linear layers, weight decay regularization and large learning rates with respect to neural collapse.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The authors provide the first theoretical analysis of neural collapse that does not rely on the unconstrained features model.\", \"The use of additional linear layers is novel and interesting technique. To the best of my knowledge such an architecture has not been studied.\", \"The results apply to deep neural networks trained with gradient descent (under a particular architecture/nonstandard activation function) as well as networks which are globally optimal for the weight decay regularized objective.\"], \"weaknesses\": [\"My main concern with the paper is the writing. The theoretical statements are very difficult to parse this is especially the case for eqs 2-5 in Theorem 3.1. Can the authors provide a more interpretable version of this theorem (hide constants, big-O etc.) and delay the details to the appendix?\", \"I have the same concern about Theorem 4.4.\", \"### Minor\", \"The proof of Theorem 3.1 in the Appendix could also be more clear. For example mentioning Weyl's inequality in the first inequality of (21)\", \"In the proof of Theorem 5.2 at the Appendix (line 1264) shouldn't it be\", \"$$\\\\kappa(W_L) = \\\\kappa(W_{L:L_1+1})^{\\\\frac{1}{L_2}}$$\", \"not\", \"$$\\\\kappa(W_L) = \\\\kappa(W_{L:L_1+1})^{\\\\frac{1}{L_1}}$$\", \"Same concern about $L_1$ vs $L_2$ in the statement of Theorem 5.2.\"], \"questions\": [\"What exactly is the role of these additional linear layers? Are they only required for proving NC2 and NC3?\", \"Do these results suggest that neural networks with 1 non-layer and many linear layers can also exhibit neural collapse?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your response. I am convinced by the contribution of this submission. I have raised my score to acceptance.\"}",
"{\"comment\": \"Thanks for the detailed response. I think this is a strong paper and it is improved with the described revisions.\"}",
"{\"title\": \"Response (1)\", \"comment\": \"We thank the reviewer for appreciating the novelty of our work and our theoretical analysis, as well as for the detailed comments. We reply to each of the points raised in the review below. We have also uploaded a revised version of the manuscript highlighting in red the main changes.\\n\\n*W1: Accessibility of the paper for a '15 mins' reader. The main results are formulated using (Abstract, l19) 'balancedness', (Abstract, l20) 'bounded conditioning' and other only later defined phrases and makes it hard to assess the attractivity/usefullness of paper unless reading it whole. It is recommended to rework (the Abstract at least, if no Conclusions are present) to make it clear and self-sustained.*\\n\\n**Response:** This is a great point and we have revised accordingly. In particular, we have edited the abstract defining balancedness and bounded conditioning, so that it is now self-contained, see l. 21-24 of the revision. We have also added a final section titled \\u2018Discussion and Concluding Remarks\\u2019, where we start by presenting the main message of the paper, then provide a discussion and conclude with future directions. \\n\\n\\n\\n\\n\\n*W2: Quite a few assumptions are required in general. More over thay are added along the way, e.g., Assumption 4.2, $|\\\\sigma(x)|\\\\le |x|$, etc., (which is ok, but also narrows the applicability of the final result). Some of those are very technical (especialy those in Theorems, such as in Theorem 3.1, (2),(3), (4)) but as well Assumption 4.3, Theorem 4.4. (10) and more) and with paper specific notation to learn, e.g., $\\\\lambda_{3\\\\to L}$. It would help paper to have an thorough discussion on applicability/limitations of these assumptions. Perhaps at expense of shortening some 'proof sketches' refering them to SM?*\\n\\n**Response:** We agree with this and have revised accordingly. A detailed discussion on the assumptions is contained in the new Section 7 of the revision. In particular, we note that Theorem 3.1 provides a connection between neural collapse and properties of well-trained networks, i.e., approximate interpolation, approximate balancedness, bounded representations/weights and good conditioning of the features. As such, the result is rather general and requires no assumptions beyond the aforementioned properties. Theorem 4.4 then instantiates our framework for proving neural collapse to a class of networks with pyramidal topology (Assumption 4.1), smooth activations (Assumption 4.2) and for a class of initializations (Assumption 4.3). These assumptions are used only for the analysis of the first phase of the training dynamics, where the network achieves approximate interpolation. Thus, they could be replaced by any other set of assumptions guaranteeing that gradient descent reaches small training loss. Specifically, such guarantees are obtained by Zou & Gu (2019) for deep ReLU networks (with stronger requirements on over-parameterization but no assumptions on the topology) and by Bombari et al. (2022) for networks with minimum over-parameterization (under a requirement on the topology milder than Assumption 4.1). As concerns Assumption 4.3 on the initialization, we discuss in page 5 a setting where it holds. In addition, by following the argument in Appendix C of Nguyen & mondelli (2020), one readily obtains that Assumption 4.3 also holds for the widely used LeCun's initialization, i.e., $W_\\\\ell^0$ has i.i.d. 
Gaussian entries with variance $1/n_{\\\\ell-1}$ for all $\\\\ell\\\\in [L]$, as long as $n_1=\\\\Omega(N)$. \\n\\nTo make space and for the sake of readability, we have simplified the statements of Theorems 3.1 and 4.4 using a big-O notation, as suggested by Reviewer 1hwn.\\n\\n\\n*W3: 'Discussion and Conclusion' sections are not presented. This is most likely due to space constrained and will be added in case of acceptance (?). Yet, I find it impacts the paper quality negatively. Especially in a light of the previous (W2) comments, the discussion and conclusions could have brought a critical 'back to reality' summary. Idealy it would bring more intuition for results and their application. Yet, I find it very important to have such Discussion reviewed before publishing \\u2026*\\n\\n**Response:** This is an excellent suggestion, and we have now added the new Section 7 titled \\u201cDiscussion and Concluding Remarks\\u201d. There, we discuss our assumptions, provide a comparison with earlier work by Nguyen & Mondelli (2020), discuss connections with the NTK regime and conclude with future directions.\"}",
"{\"metareview\": \"This paper explores neural collapse, a geometric structure observed in the last layer of deep neural networks (DNNs) at convergence, where training data is consistently represented. Moving beyond the unconstrained features model, the authors study DNNs with at least two linear layers and establish conditions for neural collapse, including low training error, balancedness of linear layers, and bounded conditioning of pre-linear features. They prove these conditions hold during gradient descent training with weight decay, particularly in networks with wide first layers and stable solutions. The authors claim that this work provides the first theoretical demonstration of neural collapse in end-to-end DNN training.\\n\\nThe reviewers raised the following strengths and weaknesses\", \"pros\": [\"First to prove neural collapse in end-to-end DNN training without unconstrained features this paper introduces key conditions like balancedness and bounded conditioning.\", \"This paper validates results with experiments on MNIST and CIFAR; aligns theory with training practices like gradient descent and weight decay.\", \"The paper highlights the role of linear layers, large learning rates, and weight decay in achieving neural collapse.\"], \"cons\": [\"Accessibility Issues: Dense initial presentation; abstract lacks clarity for general readers.\", \"Strong Assumptions: Relies on two linear layers, specific initialization, and pyramidal topology, limiting general applicability.\", \"Missing Aspects: No discussion of NC4; broader implications and applications were underexplored.\", \"Some of these concerns were assuaged by the rebuttal of the authors and all reviewers are in favor of acceptance. I concur\"], \"additional_comments_on_reviewer_discussion\": \"Some of these concerns were assuaged by the rebuttal of the authors and all reviewers are in favor of acceptance.\"}",
"{\"summary\": \"This paper presents an interesting theoretical advancement in understanding neural collapse in practical, end-to-end training scenarios, moving beyond the unconstrained features model. The authors provide a rigorous demonstration that neural collapse arises in networks with linear layers appended to a nonlinear backbone, given conditions of interpolation and balancedness. They show that these conditions hold for sufficiently wide networks trained with gradient descent and L2 regularization. The empirical results further support the theoretical findings, showcasing the robustness of neural collapse across various architectures and datasets.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"This work provides an interesting theoretical insight on the role of training algorithm in the emergence of neural collapse, which I found especially exciting, and I think it opens up new directions for understanding the generalization properties of deep learning models.\", \"weaknesses\": \"In my opinion, this is a solid paper and I can not think of a weakness.\", \"questions\": \"In Figure 3 the authors' numerical results show that non-linear layers are increasingly linear, as the depth of the non-linear part increases. Could the authors provide more insights into how this observation relates to their theoretical results and the mechanisms driving this increased linearity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response (3)\", \"comment\": \"*Q1: Line 188: \\\"... approaches 1 ...\\\" Shouldn't it be \\\"... approaches 2\\\" based on (3)? Is it still sufficient for orthogonality claims (can the proof be adjusted to account for it)?*\\n\\n**Response:** We believe our sentence in the text is correct and (3) really approaches 1. In fact, $\\\\epsilon$ approaches 0 (as $\\\\epsilon_2\\\\to 0$) and $c_3^{1/L_2}$ approaches 1 (as $L_2\\\\to\\\\infty$), which implies that the overall expression evaluates close to 1.\\n\\nIn the revision, we use a big-O notation in (3), which makes explicit the dependence on $\\\\epsilon_2$. This should clarify that the expression approaches 1.\\n\\n*Q2: Proof sketch of Theorem 4.4., lines \\\"306\\\". Why and how are \\\"two phases\\\" guaranteed to happen during GD training?*\\n\\n**Response:** In the proof, the first phase relies on an NTK-type analysis to guarantee interpolation, and the second phase retains interpolation while the network becomes more and more balanced thanks to weight decay. This can also be viewed as a separation of timescales as $\\\\lambda$ gets smaller: the number of steps needed to reach balancedness is of order $\\\\frac{1}{\\\\eta \\\\lambda}$ (weight-decay timescale). This is significantly later than the interpolation time, which is of order $\\\\frac{1}{\\\\eta}$ (NTK timescale). From our proof, we know that the first phase is NTK-like, whereas the second phase is not, since we observe low-rank weight matrices and feature learning at the end of the second phase, which could not arise in a purely NTK regime.\\n\\nThis strategy/dynamics of \\u201cNTK followed by weight decay\\u201d is to our knowledge novel and is an important theoretical contribution of our paper. It allows us to have the best of both worlds: interpolation from the NTK phase, and feature learning/balancedness from the second phase.\\n\\nWe discuss this point in the paragraph \\u2018NTK regime and beyond\\u2019 added to the new Section 7 of the revision.\"}",
"{\"title\": \"General response\", \"comment\": \"We thank the reviewers for the overall positive evaluation of our work and for the detailed comments. We discuss them in detail in the separate responses to each reviewer, and we have uploaded a revision with main changes highlighted in red.\\n\\nFollowing the suggestion of several reviewers, we have added a final section titled \\u2018Discussion and Concluding Remarks\\u2019, where we start by presenting the main message of the paper, then we discuss our assumptions, we compare with earlier work by Nguyen & Mondelli (2020), we discuss how our analysis relates to the NTK regime, and we conclude with a direction for future research.\\n\\nTo make space, as well as to improve the presentation of our results, following the suggestion of reviewer 1hwn, we have simplified the statements of Theorem 3.1 and Theorem 4.4: in the revision, we hide constants via a big-O notation and defer the precise statements to the appendix.\"}"
]
} |
1H90Gb9rJ9 | Optimizing Neural Network Representations of Boolean Networks | [
"Joshua Russell",
"Ignacio Gavier",
"Devdhar Patel",
"Edward Rietman",
"Hava T Siegelmann"
] | Neural networks are known to be universal computers for Boolean functions. Recent advancements in hardware have significantly reduced matrix multiplication times, making neural network simulation both fast and efficient. Consequently, functions defined by complex Boolean networks are increasingly viable candidates for simulation through their neural network representation. Prior research has introduced a general method for deriving neural network representations of Boolean networks. However, the resulting neural networks are often suboptimal in terms of the number of neurons and connections, leading to slower simulation performance. Optimizing them while preserving functional equivalence --lossless optimization-- is an NP-hard problem, and current methods only provide lossy solutions. In this paper, we present a deterministic algorithm to optimize such neural networks in terms of neurons and connections while preserving functional equivalence. Moreover, to accelerate the compression of the neural network, we introduce an objective-aware algorithm that exploits representations that are shared among subproblems of the overall optimization. We demonstrate experimentally that we are able to reduce connections and neurons by up to 70% and 60%, respectively, in comparison to state-of-the-art. We also find that our objective-aware algorithm results in consistent speedups in optimization time, achieving up to 34.3x and 5.9x speedup relative to naive and caching solutions, respectively. Our methods are of practical relevance to applications such as high-throughput circuit simulation and placing neurosymbolic systems on the same hardware architecture. | [
"Neural Networks",
"Boolean Networks",
"Lossless Optimization",
"Integer Linear Programming",
"NPN Classification"
] | Accept (Poster) | https://openreview.net/pdf?id=1H90Gb9rJ9 | https://openreview.net/forum?id=1H90Gb9rJ9 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yofacO8l9P",
"yAjWCWwAQd",
"xaeenccaIn",
"xOLEa9JElp",
"vNWnV95lIF",
"lsORhuJuVC",
"akATVMqEmD",
"YZHoqN5Ezt",
"Y3EQwIY5to",
"W9kqKMnqjk",
"Rl8dOGa00n",
"QfWvyu4WcC",
"PJ30Uj7GXh",
"P6UimvylwQ",
"MglFRdLUNt",
"M82ExGCSUt",
"FE27GrXdXW",
"CcdOmZU4VY",
"AF0dilLRRC",
"8OQ8y5H1op",
"7kt81grRyl",
"16BWg8bVlV"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732992729316,
1737524244456,
1732992527284,
1733123044404,
1732482510093,
1732490155014,
1730721509295,
1734806459386,
1733168592677,
1730618961807,
1733161755765,
1732631030455,
1730662761093,
1733168776025,
1732486144380,
1730648074882,
1732484947372,
1732488371004,
1732486871445,
1733161656699,
1732633133878,
1732485380174
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_vaK3"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_VSF5"
],
[
"ICLR.cc/2025/Conference/Submission13203/Area_Chair_2mSh"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_vaK3"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_Wkfo"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_Wkfo"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_eC2r"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13203/Reviewer_eC2r"
],
[
"ICLR.cc/2025/Conference/Submission13203/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Reviewer vaK3,\\n\\nWe sincerely appreciate the effort you have invested in reviewing our paper, and have taken care to address each of your comments and suggestions in our rebuttal.\\n\\nAs the rebuttal period is nearing its conclusion, we wanted to kindly follow up to ensure that our responses have reached you. If there are any further clarifications or points of discussion, we would be happy to address them.\\n\\nWe deeply value your insights and thank you for your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Dear Reviewer VSF5,\\n\\nWe are grateful for the time you have invested in reviewing our paper. We have carefully studied and addressed each of your comments in our rebuttal.\\n\\nAs the rebuttal period is nearing its conclusion, we wanted to kindly follow up to ensure that our responses have reached you. If there are any further clarifications or points of discussion, we would be happy to address them.\\n\\nWe deeply value your insights and thank you for your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"comment\": \"Thanks the authors for the detailed answers to my questions and concerns. i think the paper is worthy of an explicit accept\"}",
"{\"title\": \"Global Response to the Reviewers\", \"comment\": \"We sincerely thank the reviewers for their comprehensive evaluation of our submission, their insightful suggestions, and their contributions to enhancing the overall quality of our work. Moreover, we are encouraged by the positive feedback from the reviewers in the initial review, particularly regarding several key aspects:\\n\\n**1) Theoretical contributions.** *The paper provides a solid theoretical foundation [\\u2026] and detailed analysis of the underlying optimization problem (**Reviewer Wkfo**). The paper is technically sound, with rigorous mathematical formulations and proofs supporting the proposed methods (**Reviewer vaK3**).*\\n\\n**2) Novelty.** *The paper tackles the problem of optimizing NN representations of Boolean networks from a fresh perspective (**Reviewer vaK3**). The authors identify a specific gap in existing techniques [\\u2026]. They creatively combine ideas from Boolean function classification (NPN) and convex optimization to develop a new approach to address this gap (**Reviewer vaK3**). The concept of leveraging NPN classes to accelerate the optimization by exploiting shared representations among subproblems is particularly original (**Reviewer vaK3**). The paper establishes a novel framework for lossless optimization techniques for neural network representation of Boolean networks (**Reviewer eC2r**).*\\n\\n**3) Broader impacts.** *The paper addresses an important problem with practical implications for various domains (**Reviewer vaK3**). The experimental results clearly reflect the contribution of this work (**Reviewer eC2r**). The connection to neurosymbolic AI highlights the potential of the work for advancing this emerging field (**Reviewer vaK3**).*\\n\\n**4) Presentation.** *The paper is very well organized and written, providing the necessary background as well as the required proofs to support the claims of the authors (**Reviewer eC2r**). The paper is generally well-written and organized (**Reviewer vaK3**). The steps of the algorithms are clearly presented, and the results are reported in a concise and informative manner (**Reviewer vaK3**).*\\n\\nWe also appreciate the insightful and perceptive questions posed by the Reviewers. We have carefully considered each comment and hope that our responses below adequately address the Reviewers\\u2019 concerns.\\n\\n---\\n\\nThe changelog below summarizes the revisions that have been made to the PDF submission.\\n\\n- **Appendix C (MMP Representation)**\\n 1. We have expanded Remark C.2 to discuss in detail the time complexity for obtaining an MMP representation of a BF, the significance of the $\\\\ell_1$-relaxation on running time, and why we do not consider a relaxation to a linear program over the reals.\\n 2. We have included a new Subsection C.1, which addresses the topic of solution suboptimality due to the $\\\\ell_1$-relaxation of MMP problems.\\n- **Appendix D (Architecture-Aware Lossless Optimization)**\\n 1. We have added a new Remark D.6 on the space complexity of ``optMaintainDepth``.\\n- **Appendix E (NPN Classification Algorithm)**\\n 1. We have significantly expanded Subsection E.4 on the NPN classification algorithm, providing a thorough analysis of time and space complexity in comparison to the baselines.\\n 2. 
We have included a new Subsection E.5, which relates our NPN classification algorithm to existing NPN classification techniques in logic synthesis, and discusses important considerations behind our proposed algorithm.\\n- **Appendix F (Digital Circuits and Automata used in Experiments)**\\n 1. We have expanded this section to highlight the applicability of our optimization techniques to arbitrary Boolean network domains.\\n 2. In particular, we added a Boolean network construction for deterministic finite automata, as well as the specifications for two automata we present extended experimental results on.\\n- **Appendix G (Extended Results)**\\n 1. In Section G.2, we present new results on optimizing NN representations of BNs that encode deterministic finite automata. We summarize our findings in the text.\\n- **Appendix I (Broader Impacts)**\\n 1. We have extended this section to include an example application scenario for a neurosymbolic AI system that could utilize a homogeneous computing architecture. We explain the significance of the proposed optimization methods for such a system.\"}",
"{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"**Question 1.**\\n\\n> *1. **Generalization to other Boolean network domains.***\\n\\nThank you for your question and suggestion. The technology mapping sequence reviewed in Section 2.2 of the text, and the optimization techniques we propose, are applicable to arbitrary Boolean networks. The only adaptation required for new domains is the specification of a BN construction for the object of interest. \\n\\nIn light of your suggestion, we have updated Appendix F to include a BN construction for deterministic finite automata, as well as the specifications for two automata we present extended experimental results on. In Section G.2 of Appendix G, we present new results on compressing NN representations of BNs that encode these automata.\\n\\n---\\n\\n**Question 2.**\\n\\n> *2. **Scalability analysis and potential optimizations.***\\n\\nRegarding scalability, we have significantly expanded Subsection E.4 of Appendix E, providing time and space complexity analysis for our NPN classification algorithm, as well as for the baseline algorithms *Naive* and *Cached*. In Remark E.12, we compare the three algorithms and also consider the scalability of our NPN classification algorithm to larger BNs. \\n\\nThe matrix multiplication version of `ttToMP` (Lemma B.1), NPN transformations (Remark E.4), and integer linear program constraint evaluation for `mpToMMP` are all operations that are suitable for GPU acceleration. Moreover, as mentioned by the reviewer, the calls to these subroutines could be parallelized. We plan to pursue such implementation optimizations in future work.\\n\\n---\\n\\n**Question 3.**\\n\\n> *3. **Clarifying the impact of NPN transformations on the number of MMP computations.***\\n\\nIn light of your question, we have significantly expanded upon the time complexity analysis for our NPN classification algorithm in Remark E.8 of Appendix E. We hope this exposition makes the reduction factor intuitive. \\n\\nAs an example to explain the lower bound on the number of NPN classes and permuted phase assignment subsets, suppose we have a 3-input BF $\\\\boldsymbol{f}\\\\in\\\\mathbb{Z}^{2^3}$ such that applying the $2^{3+1}3! = 96$ possible NPN transformations to $\\\\boldsymbol{f}$ results in $m=96$ unique BFs. Now suppose we have a 3-LUT BN consisting of all these 96 unique BFs. Then such a BN will have a single NPN equivalence class ($c = \\\\frac{m}{2^{3+1}3!} = 1$), and there will be a single call to `ttToMP` for its NPN canonical form. Since there are $2^{3} = 8$ possible input negations, the NPN class of 96 BFs will split into $s=8$ permuted phase assignment subsets of size $\\\\frac{2^{3+1}3!}{2^{3}} = 2 \\\\cdot 3! = 12$. Hence, there are $s = \\\\frac{m}{2 \\\\cdot 3!} = 8$ permuted phase assignment NPN canonical forms, and we compute `mpToMMP` for each of them. Overall, for this particular BN, we are able to decrease the number of calls to `ttToMP` from 96 to 1, and the number of calls to `mpToMMP` from 96 to 8. We hope this example makes the lower bounds on $c$ and $s$ more concrete.\\n\\nIn Appendix G.3, we provide an empirical analysis of these quantities for the `aes` BN at $K=11$, and discuss how many `ttToMP` and `mpToMMP` computations are being saved by using the proposed NPN classification algorithm.\\n\\n---\\n\\n**Question 4.**\\n\\n> *4. 
**Connection to Neurosymbolic AI Implementations.***\\n\\nIn light of your suggestion, we have extended the Broader Impacts section of the Appendix to include an example application scenario. In short, we describe a hypothetical computing architecture for autonomous systems that involves decision making using sensor fusion. Our methodology would generate a size- and energy-optimized neural network that is functionally equivalent to the decision-making algorithm and applicable for use in a homogeneous matrix-vector multiplication computing architecture.\"}",
"{\"summary\": \"This paper proposes to optimize neural network representations of Boolean networks by improving the efficiency with NPN classification of sub-networks and considering objective in sub-networks during optimization. It achieves up to 5.9x speedup than the caching solution and reduces neurons and connections of final representations by 60% and 70% respectively.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper proposes to speedup the optimization of neural network representation of Boolean functions and consider architecture constraints. Instead of solving each subproblems independently, the paper finds solutions of each NPN class and exploit shared optimized representations of two-layer sub-networks. The optimization of sub networks is modeled as finding a polynomial with fewer monomials or monomial degrees. In architecture aware lossless optimization part, the level constraints are considered when performing sub-network replacement.\", \"weaknesses\": \"1. The scientific novelty is limited. NPN classification and level constraints based DAG optimization are common techniques used in logic synthesis tools and neural network compilers.\\n2. The k-input LUT technology mapping lacks fair comparison with other traditional DAG optimization methods such as ABC (Boolean DAG optimization tools including technology indepedent and technology dependent optimization) and AI compilers like Google XLA.\\n3. Only two-layer sub-NN optimization is considered which is relatively too local for better neurons and level optimization.\", \"questions\": \"1. Please provide comparison with traditional DAG optimization methods for a fair comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper proposes a new and interesting method for optimizing neural network representations of Boolean networks.\", \"strengths\": \"1. The paper identifies a unique gap in existing network compression methods, and proposes a lossless technique for optimizing a two-layer NN representation of a Boolean function. The introduction of NPN to exploit shared representations is particularly interesting. \\n\\n2. The paper is theoretically solid. Moreover, it provided extensive experimental evaluations. \\n\\n3. The paper is well-written. It is clear, well-structured.\", \"weaknesses\": \"One weakness is about the generalization. The proposed techniques focus on two-layer NNs. Extending the discussion to multi-layer NNs or other types of Boolean networks, such as biological or regulatory networks, would strengthen the claims of general applicability. \\n\\nOverall, this submission is technically sound, addresses a significant problem, and demonstrates very good experimental results. I would like to recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided a detailed rebuttal, addressing most of the reviewers\\u2019 concerns.\"}",
"{\"comment\": \"Thank you for taking the time to review our rebuttal and provide a reply. In a time when responses during the rebuttal period are not always guaranteed, we are especially grateful for your engagement and thoughtful consideration.\"}",
"{\"summary\": \"This paper addresses the challenge of optimizing neural network (NN) representations of Boolean networks (BNs). The authors point out that while NNs can represent Boolean functions, the current state-of-the-art method for deriving these representations often results in suboptimal NNs with a large number of neurons and connections, leading to inefficient simulation. Existing NN compression techniques are either lossy or inapplicable due to the specific nature of the NN-based technology mapping problem.\\n\\nThe paper makes three key contributions. First, it proposes a novel lossless technique for optimizing two-layer NN representations of Boolean functions, focusing on reducing the number of neurons and connections while preserving functional equivalence. This is achieved by formulating the problem as a constrained minimization task and employing a convex relaxation technique to solve it. Second, the authors introduce an objective-aware optimization algorithm that leverages Negation-Permutation-Negation (NPN) classification. This algorithm exploits shared representations among the two-layer sub-networks to significantly speed up the optimization process, demonstrating a substantial speedup over naive and caching-based solutions. Third, an architecture-aware lossless optimization algorithm is proposed, targeting both unmerged and layer-merged NN architectures. This algorithm determines which sub-NNs should be minimized to achieve overall NN size reduction while optionally maintaining the depth of the layer-merged network, which is critical for latency-sensitive applications. Experimental results on benchmark BNs derived from digital circuits show significant reductions in the size of the resulting NNs, confirming the effectiveness of the proposed optimization techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality:** The paper tackles the problem of optimizing NN representations of Boolean networks from a fresh perspective. While NN compression is a well-studied area, the authors identify a specific gap in existing techniques, namely the lack of *lossless* methods applicable to the BN-to-NN mapping problem. They creatively combine ideas from Boolean function classification (NPN) and convex optimization to develop a new approach to address this gap. The concept of leveraging NPN classes to accelerate the optimization by exploiting shared representations among subproblems is particularly original and demonstrates a deep understanding of the underlying structure of the problem. Moreover, the introduction of the *leeway* concept and the architecture-aware algorithm for maintaining depth in layer-merged NNs showcases innovative thinking in adapting the optimization process to different NN architectures.\\n\\n**Quality:** The paper is technically sound, with rigorous mathematical formulations and proofs supporting the proposed methods. The authors carefully define the necessary concepts and notation, ensuring clarity in their technical exposition. The experimental methodology is well-designed, with appropriate benchmarks and metrics used for evaluation. The comparison against relevant baselines (naive and caching solutions) provides a strong validation of the proposed techniques. The inclusion of additional results and analysis in the appendix further reinforces the quality of the work.\\n\\n**Clarity:** The paper is generally well-written and organized. 
The introduction effectively motivates the problem and summarizes the key contributions. The background section provides the necessary context and definitions, though perhaps could benefit from a slightly higher-level motivating example early on for a broader audience. The use of figures, especially Figure 1, significantly aids in understanding the optimization problem and the proposed approach. The steps of the algorithms are clearly presented, and the results are reported in a concise and informative manner.\\n\\n**Significance:** The paper addresses an important problem with practical implications for various domains. Efficient BN simulation is crucial in areas like circuit verification and design automation. The proposed optimization techniques can lead to substantial reductions in NN size and faster optimization times, making NN-based BN simulation more practical and scalable. Moreover, the connection to neurosymbolic AI highlights the potential of the work for advancing this emerging field. The ability to represent symbolic systems efficiently using NNs could pave the way for new hardware architectures and algorithms that combine the strengths of both symbolic and connectionist AI approaches. The paper's focus on lossless compression is also significant from a safety and reliability perspective, as it ensures that the optimized NN representation remains functionally equivalent to the original BN.\", \"weaknesses\": \"1. **Impact of L1 Relaxation:** The authors acknowledge that relaxing the \\u21130-norm objective to \\u21131 might lead to suboptimal solutions. However, the paper lacks a detailed analysis of the extent to which this relaxation affects the quality of the results. Quantifying the gap between the \\u21130 and \\u21131 solutions for the benchmark BNs, or investigating alternative approximation methods for the \\u21130-norm minimization, would provide a more complete understanding of the trade-offs involved. Perhaps experiments comparing the optimized NN size obtained with \\u21131 relaxation to theoretical lower bounds achievable with \\u21130 could highlight the potential room for improvement.\\n\\n2. **Scalability to Larger BNs:** The experimental results suggest that the optimization time can become substantial for large BNs and higher values of K (maximum input size of LUTs). While the NPN classification algorithm offers speedups compared to caching, the paper does not thoroughly investigate the scalability limitations of the overall method. Analyzing the runtime complexity as a function of BN size and K, and potentially exploring strategies for further improving the efficiency of the optimization process (e.g., by leveraging parallelism or more sophisticated data structures), would be beneficial. Consider profiling the algorithms to pinpoint bottlenecks and focus optimization efforts.\\n\\n3. **Clarity and Accessibility for a Broader Audience:** Although the technical content is generally well-explained, the paper could benefit from a more intuitive and accessible introduction to the problem and its significance. Providing a high-level illustrative example that highlights the practical implications of optimizing NN representations of BNs would engage a broader readership within the ICLR community. While the paper currently focuses on a specialized audience with expertise in Boolean functions, making it more approachable for readers with a general machine learning background would enhance its impact.\", \"questions\": \"1. 
**Generalization to other Boolean network domains:** The experimental results focus primarily on digital circuits. Could the authors elaborate on the applicability of their methods to other types of Boolean networks, such as gene regulatory networks or biological networks? Are there any specific adaptations or considerations needed for these domains? Presenting results on even a small set of non-circuit BNs would greatly bolster the claim of general applicability.\\n\\n2. **Scalability analysis and potential optimizations:** The optimization time appears to grow considerably with BN size and K. Could the authors provide a more detailed analysis of the computational complexity of their methods? Are there any potential optimizations or algorithmic improvements that could be explored to enhance scalability, such as parallelization or more efficient data structures? A breakdown of execution time for different stages of the algorithm would help identify bottlenecks\\n\\n3. **Clarifying the impact of NPN transformations on the number of MMP computations:** The paper mentions that using NPN classification can reduce the number of MMP computations compared to function caching. However, the precise reduction factor ((2^k)!/2^(k+1)k!) is not immediately intuitive. Could the authors provide a more detailed explanation of how this reduction is achieved and its significance in practice, perhaps with a concrete example for a small value of k? It would be especially insightful to directly visualize how many MMP computations are saved for each benchmark circuit.\\n\\n4. **Connection to Neurosymbolic AI Implementations:** The paper mentions the potential of the work for neurosymbolic AI, but the link is somewhat abstract. Could the authors expand on how specifically the proposed methods could be integrated into neurosymbolic systems? For example, are there specific neurosymbolic architectures or frameworks where these optimized NN representations would be particularly beneficial? Perhaps a concrete example application scenario, even if hypothetical, could illustrate the potential.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for taking the time to review our rebuttal and provide a reply. In a time when responses during the rebuttal period are not always guaranteed, we are especially grateful for your engagement and thoughtful consideration.\\n\\nIf our explanations have addressed any uncertainties in your assessment of our submission, we kindly ask you to consider updating your confidence score to reflect this.\\n\\nThank you once again for your time and effort in reviewing our work. Please let us know if there is anything else we can clarify or address.\"}",
"{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you the authors for providing detailed response to my questions and updating the manuscript.\"}",
"{\"summary\": \"This paper presents a new approach to optimizing neural network representations of Boolean networks. The authors propose a technique compressing NNs via minimizing the MP representation of each Boolean function in the network.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper provides a solid theoretical foundation for the proposed approach, such as the equation introduction of new concepts like NPN and detailed analysis of the underlying optimization problem.\", \"Evaluation looks comprehensive, including various bnchmarks, to demonstrate the effectiveness especialy in reducing network size and improving optimization time.\", \"The proposed method looks interesting and novel as it combines multiple techniques including MP-based mapping, NPN equivalence classes and objec-aware optmization, to optimize representations of BNs.\"], \"weaknesses\": \"Please see questions.\", \"questions\": [\"how well does it scale, as solving integer LP can be more demanding than solving LPs.\", \"Computation complexity wiase, how does it compare against SOTAs?\", \"What's the impat of NPN equivalence lasses on the optimization process?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for taking the time to review our rebuttal and provide a reply, we are encouraged by your words. In a time when responses during the rebuttal period are not always guaranteed, we are especially grateful for your engagement and thoughtful consideration.\"}",
"{\"title\": \"Rebuttal by Authors\", \"comment\": \"Many thanks for your positive feedback and insightful comments.\\n\\nWe have uploaded a new PDF submission with changes listed in the\\u00a0*Global Response to the Reviewers*\\u00a0comment. Please refer to the new PDF when reading our responses below.\\n\\n---\\n\\n**Question 1.**\\n\\n> *How well does it scale, as solving integer LP can be more demanding than solving LPs.*\\n\\nThe integer linear program (LP) we solve to obtain an MMP representation of a $k$-input BF is an NP-hard problem, and a brute-force search requires time $2^{O(k2^k)}$. As the reviewer points out, this is indeed a more demanding problem than linear programming, for which polynomial-time algorithms are known. Relaxing the MMP integer LP to a LP over the reals would yield a time complexity polynomial in $2^k$. However, such a relaxation can lead to infeasible solutions that would break functional equivalence \\u2014 the key property our methods are designed to preserve.\\n\\nTo scale to large BNs, our paper discusses the use of other techniques such as $\\\\ell_1$-relaxation of the MMP problem, our proposed NPN classification algorithm, and limiting the size of $k \\\\leq K$ with $K$-LUT mapping.\\n\\nIn light of your question, we have significantly expanded Remark C.2 in Appendix C. Please refer to this remark for a more detailed discussion on this topic.\\n\\n---\\n\\n**Question 2.**\\n\\n> *Computation complexity wise, how does it compare against SOTAs?*\\n\\nThe SOTA NN-based technology mapping solution of Gavier et al. (2023) computes the MP representation of each $k$-input BF in the BN. However, Gavier et al. (2023) do not specify whether this is done per vertex, or per unique BF in the BN. Consequently, we developed baseline algorithms, *Naive* and *Cached,* to reflect these two kinds of approaches. We have added a comprehensive analysis on the time and space complexity of these methods in comparison to the proposed NPN classification algorithm in Appendix E.4.\\n\\n---\\n\\n**Question 3.**\\n\\n> *What's the impact of NPN equivalence classes on the optimization process?*\\n\\nThe architecture-aware lossless optimization algorithms we propose require computing the MP and MMP of each BF in the input BN. Since `ttToMP` and `mpToMMP` are computationally demanding, our goal was to minimize the number of times these subroutines are invoked. If $c$ functions are within the same NPN equivalence class, then we only need to compute `ttToMP` for one of them, and can use NPN transformations to find the MPs for the remaining $c-1$. Similarly, if $s$ functions are within the same permuted phase assignment subset of an NPN equivalence class, then we only need to compute `mpToMMP` for one of them, and can use NPN transformations to find the MMPs for the remaining $s-1$ (assuming the criterion we use for MMPs is PN-invariant). In the latter case, the fact that the MMPs we compute using NPN transformations are indeed minima w.r.t. the PN-invariant criterion means the quality (number of reduced connections and neurons) of the NN optimization process is unaffected by the use of NPN equivalence classes, yet by using them we save calls to `ttToMP` and `mpToMMP`. We provide a more detailed analysis in Appendix E.4.\\n\\n---\\n\\nIn closing, we are more than willing to address any additional questions or provide further clarification if needed.\"}",
"{\"summary\": \"In this work, the authors present an optimization framework for deriving representations of neural networks of Boolean networks. The overall goal of the proposed framework is to overcome known limitations of current state-of-the-art methodologies that result in suboptimal neural networks, in terms of number of neurons and connections, which hinders their application in real-case scenarios. More specifically, the proposed method introduces a lossless technique for optimizing neurons and connections needed in a two-layer NN representation of a Boolean function. This is achieved by establishing an optimization problem that transforms the pruning of the network architecture into monomial reduction tasks for a given polynomial. The lossless functionality between the Minimized Multilinear Polynomial and the represented Boolean function is achieved by incorporating the heavyside threshold of the NN, with the relaxation of the optimization objective to the $l_1$-norm providing the required convexity. Due to the NP-hard nature of the proposed optimization, the authors introduce an objective-aware optimization, which is based on Negation-Permutation-Negation classification, that constructs subclasses of multilinear polynomial representations for the minimization of the Boolean functions, exploiting the shared representations among them and accelerating the NN optimization process using unique function caching. Finally, the paper provides two alternatives for optimizing the Neural Networks, one that involves all the vertices of the binary networks to the minimization of the multilinear polynomial and another that selects the subset of vertices in such a way that the depth of the resulting layer-merged neural network does not increase. The proposed method achieves significant improvements in contrast to the state-of-the-art approach in terms of optimization speed and the required connections and neurons.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is very well organized and written, providing the necessary background as well as the required proofs to support the claims of the authors. The experimental results clearly reflect the contribution of this work.\\n\\nThe proposed method outperforms the current state-of-the-art in terms of decreasing the size of neural networks, which means the number of connections and neurons, while simultaneously preserving the equivalent functionality. The proposed lossless compression, along with the objective-aware optimization, resulting in a faster and more efficient solution than the state-of-the-art.\\n\\nThe paper establishes a novel framework for lossless optimization techniques for neural network representation of boolean networks that can provide further advantages to neurosymbolic AI.\", \"weaknesses\": \"The proposed method is established in two-layer NN representation without discussing the potential generalization on the (2 + n) layer NN representation and the theoretical limits of the proposed method regarding this potential. Taking into account the non-stochastic nature of the proposed NPN transformation and the required time $O(m2^k) + e$, the proposed algorithm seems quite limited to the 2-layer NN representation. However, a further discussion of this can provide fruitful insights for future work.\\n\\nEven though the relaxation of the optimization objective provides the required convexity, the problem still remains NP-hard. 
Indeed, the proposed deterministic solution ensures the lossless functionality of the binary network with the caching solution providing significant acceleration, hindering, however, the scalability of the proposed method in target networks. To this end, I recommend further discussion of the existing stochastic methodologies in the bibliography for lossy solution, studying the accuracy-efficiency tradeoff between deterministic and non-deterministic methodologies. In my opinion, the deterministic linear programing nature of the proposed optimization method should be noted in the abstract of the paper.\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"Many thanks for your insightful comments.\\n\\nWe have uploaded a new PDF submission with changes listed in the *Global Response to the Reviewers* comment. Please refer to the new PDF when reading our responses below.\\n\\n---\\n\\n**Weakness 1.**\\n\\n>*1. The scientific novelty is limited. NPN classification and level constraints based DAG optimization are common techniques used in logic synthesis tools and neural network compilers.*\\n\\nFor our problem setting, NPN classification can be used to find the MPs within the same NPN class. However, applying NPN classification directly for MMPs will lead to nonoptimal solutions since the optimization cost functions we consider are not invariant within the same NPN class. In this paper, we bring scientific novelty by developing the theoretical results that allow us to utilize NPN classification for MPs and MMPs.\\n\\nWe have added a new section to the Appendix, Section E.5, that discusses the relation of our NPN classification algorithm to existing NPN techniques from logic synthesis, and highlights the subtleties involved with the optimization cost functions we consider.\\n\\nRegarding DAG optimization techniques, we provide a detailed response under our response to **Weakness 2**.\\n\\n---\\n\\n**Weakness 2.** \\n\\n>*2. The k-input LUT technology mapping lacks fair comparison with other traditional DAG optimization methods such as ABC (Boolean DAG optimization tools including technology independent and technology dependent optimization) and AI compilers like Google XLA.*\\n\\nThank you for your comment, it has helped us to realize where our methods fit into the bigger picture of BN and NN optimization.\\n\\nWe would like to begin by saying that our optimization methodology is not incompatible with (i) DAG optimization or (ii) AI compiler techniques. On the contrary, the former can be applied before K-LUT mapping; for convenience, we assume that the BN is already optimized in the DAG sense. The latter are performed over a NN representation, and can be applied after the NN has been optimized with our method. An illustration of these steps is shown below.\\n\\n``\\nBN \\u2192 (i) \\u2192 BN \\u2192 (Our Methods) \\u2192 NN \\u2192 (ii)\\n``\\n\\nWe believe that there may be a misunderstanding regarding the methodology of our paper and how it relates to (i) logic synthesis DAG optimization methods and (ii) AI/DL/ML compilers. To ensure we are on the same page, we will define (i) and (ii), and then relate them to our methods.\\n\\n**(i) Logic synthesis DAG optimization methods:**\\n- Input: BN\\n- Output: size/depth optimized BN\\n\\nLogic synthesis tools, such as ABC [1] and mockturtle [2], support various optimizations for technology-independent and technology-dependent representations of Boolean networks (BNs). Common BN representations include And-Inverter Graphs (AIGs) and K-LUT networks. Optimization methods for AIGs [3, 4] and K-LUT networks [5], such as rewriting, refactoring, and resubstitution [3], seek to minimize size (measured by the number of nodes in the graphs), while other methods, such as algebraic rebalancing [6], seek to minimize depth. \\n\\n**Relating Our Methods to (i)**\\n\\nIn Section 2.2 of the paper, we review the SOTA for NN-based technology mapping (Gavier et al., 2023), which takes a BN as input and converts it into a functionally equivalent NN. 
Note that the logic synthesis DAG optimization methods can be applied to the BN prior to this representation conversion, which is what Gavier et al. (2023) propose with their Yosys + ABC workflow. Hence, these optimizations are orthogonal to our techniques, and can be applied in combination with them.\\n\\n**(ii) AI compilers:**\\n- Input: NN\\n- Output: optimized high-level IR, CPU/GPU instructions\\n\\nAI compilers take a NN model as input, and convert it into a high-level intermediate representation (IR) which we will refer to as the computation graph. Various optimizations are then applied to this representation, such as node-level optimizations (no-op elimination, zero-dim-tensor elimination), block-level optimizations (algebraic simplification, operator fusion, operator sinking) and dataflow-level optimizations (common sub-expression elimination, dead code elimination). The AI compiler will then perform further optimizations that are technology dependent, based on a target architecture (CPU/GPU etc.) [7].\\n\\n**Relating Our Methods to (ii):**\\n\\nOur technique optimizes the NN representation of the BN, eliminating neurons and connections from linear layers of the Heaviside NN. The computation graph optimizations briefly reviewed in (ii) of AI compilers do not perform such operations. Indeed, the NN we obtain after optimization can be passed to an AI compiler for further optimization. Hence, these optimizations are also orthogonal to our techniques, and can be applied in combination with them.\"}",
"{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We sincerely appreciate your positive feedback and insightful comments.\\n\\nWe have uploaded a new PDF submission with changes listed in the\\u00a0*Global Response to the Reviewers*\\u00a0comment. Please refer to the new PDF when reading our responses below.\\n\\n---\\n\\n**Weakness 1.**\\n\\n> *1. **Impact of L1 Relaxation.***\\n\\nThank you for your insightful comment. We strongly agree that a theoretical analysis on the consequences of the $\\\\ell_1$-relaxation for the MMP optimization problem, and an investigation of alternative approximations for the $\\\\ell_0$-norm minimization, would provide great insight into the nature of the problem and trade-offs involved. However, we believe that such a formal investigation lies beyond the scope of this work. \\n\\nAs a preliminary investigation on the topic, we have added a new Subsection C.1 to Appendix C that addresses the following question regarding solution suboptimality: by optimizing the MP representation of a BF w.r.t. the weighted one-norm, do we ever increase the value of the weighted zero-norm? We find that this does not occur for 2-input and 3-input BFs. However, it does occur for a small fraction of 4-input BFs. \\n\\nAs a more general note, obtaining $\\\\ell_0$-norm minima for every possible BF using brute-force search is computationally intractable, as there are $2^{2^k}$ BFs with $k$ inputs, and finding the minimum of each BF requires time $2^{O(k2^k)}$ (Remark C.2). Hence, exhaustive analyses on $\\\\ell_0$-norm minima for $k\\\\geq 4$ are currently infeasible.\\n\\n---\\n\\n**Weakness 2.**\\n\\n> *2. **Scalability to Larger BNs.***\\n\\nWe address this comment in our response to **Question 2**.\\n\\n---\\n\\n**Weakness 3.**\\n\\n> *3. **Clarity and Accessibility for a Broader Audience.***\\n\\nThank you for your thoughtful feedback on improving the clarity and accessibility of our paper. We appreciate your recognition of the technical content's quality and agree that the introduction could be refined to better connect with a broader audience within the ICLR community.\\n\\nFor the final version, we will explore incorporating a high-level example into the introduction that highlights the practical implications of optimizing NN representations of BNs \\u2014 perhaps with reference to the more concrete example that we have added to the Broader Impacts section as per your suggestion in **Question 4**. We value your comment as a guiding principle for these revisions.\"}",
"{\"title\": \"Rebuttal by Authors\", \"comment\": \"We sincerely appreciate your positive feedback and insightful comments.\\n\\nWe have uploaded a new PDF submission with changes listed in the\\u00a0*Global Response to the Reviewers*\\u00a0comment. Please refer to the new PDF when reading our responses below.\\n\\n---\\n\\n**Weakness 1.**\\n\\n> *The proposed method is established in two-layer NN representation without discussing the potential generalization on the $(2 + n)$ layer NN representation and the theoretical limits of the proposed method regarding this potential. Taking into account the non-stochastic nature of the proposed NPN transformation and the required time $O(m2^k) + e$, the proposed algorithm seems quite limited to the 2-layer NN representation. However, a further discussion of this can provide fruitful insights for future work.*\\n\\nThank you for your perceptive comment. As the reviewer points out, the proposed algorithm optimizes two-layer sub-NN representations of BFs by finding MMPs. Consequently, our approach is centralized around the $k$-input BFs that form the $K$-LUT BN, and the construction that maps MPs/MMPs to two-layer NNs. Extending the existing methodology to consider $(2 + n)$ layer NN representations may therefore require grouping multiple BFs and solving for an optimized NN representation of the group. We agree with the reviewer that either extending the proposed optimization technique or developing new methods to consider optimization across multiple layers presents a promising direction for future research. However, such an investigation lies beyond the scope of this work.\\n\\n---\\n\\n**Weakness 2.**\\n\\n> *Even though the relaxation of the optimization objective provides the required convexity, the problem still remains NP-hard. Indeed, the proposed deterministic solution ensures the lossless functionality of the binary network with the caching solution providing significant acceleration, hindering, however, the scalability of the proposed method in target networks. To this end, I recommend further discussion of the existing stochastic methodologies in the bibliography for lossy solution, studying the accuracy-efficiency tradeoff between deterministic and non-deterministic methodologies. In my opinion, the deterministic linear programing nature of the proposed optimization method should be noted in the abstract of the paper.*\\n\\nThank you for your insightful comment. We strongly agree that a detailed analysis of the accuracy-efficiency tradeoff between the proposed lossless optimization and deterministic/non-deterministic lossy optimization techniques would provide significant insight, and we plan to explore this direction in future work. However, since the focus of this paper was on presenting a new lossless optimization technique, we believe a comprehensive comparison to lossy techniques falls beyond its scope. \\n\\nIn light of your suggestion, we have updated the abstract to state that the optimization algorithm we propose is deterministic.\"}",
"{\"comment\": \"Dear Reviewer VSF5,\\n\\nWe hope this message finds you well. With less than 24 hours remaining for reviewers to post messages to the authors, we wanted to kindly follow up regarding our responses to your comments.\\n\\nWe respectfully ask that you review our submission in light of the additional information we provided in our rebuttal. We would sincerely appreciate at least an acknowledgment of our responses. If there are any remaining points of clarification or concerns, we would be glad to address them promptly.\\n\\nThank you once again for your time and effort in reviewing our work. Your thoughtful consideration means a great deal to us.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"comment\": \"I would like to thank the authors for answering my comment. I will maintain my score.\"}",
"{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"**Weakness 3.**\\n\\n> *3. Only two-layer sub-NN optimization is considered which is relatively too local for better neurons and level optimization.*\\n\\nWe would like to highlight that the parameter $K$ governs the degree of locality in the optimization process. Specifically, as $K$ increases, the optimization becomes progressively more global. For instance, in the extreme case of $K = n$, where $n$ is the number of inputs to the BN, the resulting optimized NN reduces to two-layers, and its connections and hidden neurons are optimized by accounting for the entire functionality of the original BN.\\n\\nExploring alternative approaches that optimize across multiple layers when $K < n$ is an interesting direction for future research; however, it lies beyond the scope of this work.\\n\\n---\\n\\n**Question 1.**\\n\\n> *1. Please provide comparison with traditional DAG optimization methods for a fair comparison.*\\n\\nAs discussed in our response to **Weakness 2**, optimization methods for DAG representations of BNs, including AIG/$K$-LUT representations, are orthogonal to our work. Moreover, optimization methods for DAG representations of NN computation graphs that are utilized by AI compilers are also orthogonal to our work. Consequently, we do not include comparisons with these methods. \\n\\n---\\n\\nIn closing, we appreciate the reviewer\\u2019s insights and recognize that there may be related works we have inadvertently missed. If the reviewer is aware of any specific literature that is closely aligned with our methods, we would be grateful for their recommendations.\\n\\n---\\n\\n**References**\\n\\n[1] https://github.com/berkeley-abc/abc\\n\\n[2] https://github.com/lsils/mockturtle\\n\\n[3] Mishchenko, Alan, Satrajit Chatterjee, and Robert Brayton. \\\"DAG-aware AIG rewriting a fresh look at combinational logic synthesis.\\\"\\u00a0*Proceedings of the 43rd annual Design Automation Conference*. 2006.\\n\\n[4] Li, Yingjie, et al. \\\"DAG-aware Synthesis Orchestration.\\\"\\u00a0*IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*\\u00a0(2024).\\n\\n[5] Riener, Heinz, et al. \\\"On-the-fly and DAG-aware: Rewriting Boolean networks with exact synthesis.\\\"\\u00a0*2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)*. IEEE, 2019.\\n\\n[6] Cortadella, Jordi. \\\"Timing-driven logic bi-decomposition.\\\"\\u00a0*IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*\\u00a022.6 (2003): 675-685.\\n\\n[7] Li, Mingzhen, et al. \\\"The deep learning compiler: A comprehensive survey.\\\"\\u00a0*IEEE Transactions on Parallel and Distributed Systems*\\u00a032.3 (2020): 708-727.\"}"
]
} |
1GTARJhxtq | Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models | [
"Zachary Ankner",
"Cody Blakeney",
"Kartik Sreenivasan",
"Max Marion",
"Matthew L Leavitt",
"Mansheej Paul"
] | In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a larger model can yield high-quality data, we investigate whether smaller models can be used for perplexity-based pruning and how pruning is affected by the domain composition of the data being pruned. We demonstrate that for multiple dataset compositions, perplexity-based pruning of pretraining data can significantly improve downstream task performance: pruning based on perplexities computed with a 125 million parameter model improves the average performance on downstream tasks of a 3 billion parameter model by up to 2.04 and achieves up to a 1.45× reduction in pretraining steps to reach commensurate baseline performance. Furthermore, we demonstrate that such perplexity-based data pruning also yields downstream performance gains in the over-trained and data-constrained regimes. | [
"Data",
"Data Filtering",
"Data Pruning",
"Pretraining",
"Perplexity",
"Large Language Model",
"LLM"
] | Accept (Poster) | https://openreview.net/pdf?id=1GTARJhxtq | https://openreview.net/forum?id=1GTARJhxtq | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xe9yYrO9qL",
"vPqafKGHIJ",
"tZyEavXffh",
"t3vC52Bwwu",
"pVeYmNGcb7",
"eGChSe2cBx",
"dJnQIQ5kpG",
"ZywBbWh82H",
"Y0WyY58HQC",
"UKdXMDyDMl",
"Jlrl45zlMe",
"J2SRI5JWL0",
"HIsB4Qqu4J",
"EJ6Z3mORvT",
"B2kQC2TIYK",
"AYGnFt1rRW",
"8BIMgXGP7t",
"2BER45scH8"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment"
],
"note_created": [
1737523966841,
1730470138614,
1732453757530,
1732646444900,
1732261120185,
1732553258997,
1732614064921,
1730838955428,
1732257477094,
1732304195167,
1732257551095,
1732257338726,
1730668124598,
1732256499249,
1732261063523,
1734660788763,
1730302051439,
1732476168803
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_wYdV"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_wjeV"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_qYWL"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_qYWL"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_13P9"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_wjeV"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9185/Area_Chair_rnXz"
],
[
"ICLR.cc/2025/Conference/Submission9185/Reviewer_13P9"
],
[
"ICLR.cc/2025/Conference/Submission9185/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"The authors filter LLM pre-training data by using the perplexity of a smaller language model. They demonstrate that dataset filtering improves the [initial] learning curve of LLM pre-training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The method is well motivated. Except for some uncommon terminology that is explained in later sections like \\\"non-standard training regime\\\", \\\"over-training\\\" (which is not over-fitting) the paper is clearly written.\", \"weaknesses\": \"L186 suggests that the final models are (pre-)trained for a fixed number of steps, no matter the dataset size. This sets the stage for dataset filtering, since training on the full dataset may go through fewer epochs. It would be interesting to train for long enough to show convergence in the plots in Fig. 1. The story would be more convincing if there is an offset between the blue and red curves even after convergence. In fact, the \\\"over-training\\\" experiment in Sec. 3.4 shows diminishing gains, so I can imagine that they disappear fully at some point. The method would still have merits (steeper pre-training curve), just not the ones claimed in the paper.\\n\\nNovelty. Perplexity-based pruning and countless variations of it are well-studied. The authors set their work apart from prior work in L058, but neither of the arguments (i)-(iii) (evaluation on downstream task, exploration of domain compositions, \\\"non-standard\\\" evaluation regimes) strike me as particularly strong.\\n\\nI don't think that Algorithm 1 is really helping clarity. 1-2 normal equations would be just as expressive and more concise.\", \"edit\": \"my point about novelty was unjustified - I have increased my scores after the rebuttal\", \"questions\": [\"Fig.4 is interesting, but I'm not sure how Fig. 3 is relevant in practice - could you clarify?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you to the authors for their detailed response. The results regarding computational complexity has addressed my concerns.\"}",
"{\"comment\": \"We thank the reviewer for reading our responses and for their further suggestion. We agree that such a comment is important to include in the paper and will add the following to the Discussion section of the final paper:\\n\\n> \\u201cOne limitation of our work is that while domains such as code are largely pruned, we do not have the computational budget required for training models of sufficient size to test the coding ability of models on benchmarks such as HumanEval [1]. It remains important future work to further scale perplexity pruning to larger models such that the impact of pruning specific domains can be better understood.\\u201d\\n\\n## References\\n\\n[1] Chen, Mark, et al. \\\"Evaluating large language models trained on code.\\\" arXiv preprint arXiv:2107.03374 (2021).\"}",
"{\"title\": \"Response to Reviewer wjeV (part 2/2)\", \"comment\": \"> \\u201cA quantized model may lead to better inference efficiency while calculating the perplexity. Was this considered while running the experiments?\\u201d\\n\\nWe did not consider such a method for reducing the perplexity calculation costs at the time of writing the paper but we suspect that it would work and should be used in future experiments.\\n\\n> \\u201cHigh perplexity selection will also inevitably lead to the inclusion of a significant portion of the noisier examples in the overall dataset. How can we determine the proportion of such examples in the final dataset and exclude them reliably?\\u201d\\n\\n\\n\\nWe agree that while selecting higher perplexity samples leads to an improvement in performance, it may bias the dataset to contain noisier samples. One potential method for removing such examples is combining our perplexity-based pruning method with other data filtering methods specifically targeted at noisy examples. One example would be training a simple classifier on curated examples of noisy text [4] and then using that classifier to further prune the perplexity-pruned dataset.\\n\\n\\n\\n\\n> \\u201cMinor typo (line 66): perplexity-basd -> perplexity-based\\u201d\\n\\nThank you for catching this. It is now fixed.\\n\\n\\n\\n> \\u201cIt would be useful to include the following closely related data pruning works in the related work section.\\u201d\\n\\nThank you for bringing these related works to our attention. We will include the updated related works in the final version of our paper.\\n\\n## References\\n\\n[1] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" arXiv preprint arXiv:2001.08361 (2020).\\n\\n[2] Anthony, Quentin, et al. \\\"Transformer Math 101.\\\" EleutherAI, 18 Apr. 2023, blog.eleuther.ai/transformer-math/.\\n\\n[3] Barton, Tessa. \\\"Calibrating the Mosaic Evaluation Gauntlet.\\\" Databricks, 30 Apr. 2024, https://www.databricks.com/blog/mosaic-research/calibrating-mosaic-evaluation-gauntlet.\\n\\n[4] Gao, Leo. \\\"An empirical exploration in quality filtering of text data.\\\" arXiv preprint arXiv:2109.00698 (2021).\"}",
"{\"comment\": \"We greatly appreciate reviewer 13P9 for engaging with the rebuttal and reading our responses. We are pleased to know that the clarifications were helpful for the reviewer. We would just like to follow up and see if there are any other questions we can answer or information we can provide that would convince you to increase our scores/advocate for acceptance of the paper.\"}",
"{\"comment\": \"Based on your response, since you do not have the capacity to empirically test this for the paper, I think it is critical to mention the unknown effect of pruning on performance in the pruned domains in your paper.\"}",
"{\"summary\": \"The paper proposes that smaller language models effectively prune large datasets in a way that benefits the training of much larger model. Applying perplexity-based pruning techniques, they explore using a small model to filter high-quality subsets of data for training larger models. This approach is interesting because it\\u2019s a cost-effective alternative to using large models for pruning, and is applicable in real settings. The findings indicate benefits for downstream accuracy and training efficiency.\\n\\nThe paper demonstrates that a 125m parameter model can successfully prune data for large models and improve downstream task performance. The paper shows empirical results testing on The Pile and Dolma, two datasets with very different domain structures.\\nThey also study the two settings of over-training and data-constrained setups and provide additional insights.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The goal, and the process, and algorithm are defined and presented very clearly. Experiments cover multiple settings, with different model sizes and training algorithms.\\nThe proposed method is super useful for researchers who investigate practical techniques for data curation, with insightful empirical results. \\nExperiments include two very different dataset distributions, the Pile dataset and Dolma. The work shows thorough experiments for various selection rates and perplexity criteria, presenting strong evidence about settings in which perplexity pruning does and does not work.\", \"weaknesses\": \"Authors claim that datasets pruning increases the proportion of general domain data from web-scraped domains, and decreases the proportion of specific and technical domains. But it is unclear and counter intuitive why training on general domain data improves performance of models on benchmarks. I think the paper lacks analysis to explain this observation.\", \"questions\": \"How do you expect the results to scale on models larger than 3B parameters?\\n\\nHow does models' performance change on domains which are pruned the most?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer 13P9 (part 1/2)\", \"comment\": \"We would like to thank Reviewer 13P9 for the time they spent reviewing our paper. Our responses to the reviewer\\u2019s feedback are listed below.\\n\\n> \\u201cThe main results (Table 1) do not include a random baseline i.e. what is the performance of a model trained on a subset of the data which has a similar size as the perplexity filtered buckets but is selected randomly?\\u201d\\n\\nThe baseline results presented in Table 1 are implemented as described in the reviewer\\u2019s comment as our pruning results are constructed such that the number of tokens post-pruning equals the training duration. Namely, if our experiment has a selection rate of $r_s$ and we have a desired training duration of $D$ examples, then we compute the perplexity and sample from a pool of $\\\\frac{1}{r_s}D$ examples from the original dataset. As such, post-pruning we are left with $D$ examples. The baseline results in Table 1 are constructed by just training on a random set of $D$ examples from the original dataset. As such, the baseline is a random selection that is the same size as the perplexity-pruned data. We understand that this is not stated clearly in the paper, and we will clarify these details in our experimental setup.\\n\\nThe results in Table 1 are conducted under the assumption that the dataset is sufficiently large such that we can prune without requiring multiple passes through the pruned data. We relax this assumption in Section 3.5 where we investigate the data-constrained regime, and the baseline results in Figure 2 are randomly sub-sampled such that they are of the same size as the perplexity pruned data.\\n\\n> \\u201cThe paper does not contain ablations on the size of the reference model and sensitivity of the results to the random split (L113) used for training the reference model. Though exploring this space is computationally expensive, it may be useful to present 1-2 additional data points.\\u201d\\n\\nWe agree that evaluating the impact of reference model size on final performance is an interesting direction for future work. Based on the findings of Marion et al. [1] we expect that increasing the reference model size would lead to increases in pruning performance, but it would indeed be valuable to reconfirm this for our experimental setup. We will add this result for the final paper.\\n\\n The results we present in the paper do factor in the sensitivity to the random split used to train the reference model. Each experiment in the paper is conducted across two seeds, and the same random seed used for training the final model is also used to split the dataset and train the reference model. Thus the gain\\u2019s from perplexity pruning are robust to the random split used as each result presented in the paper is the performance across two different random splits of the training data. We will clarify this detail in our experimental setup for the final paper.\\n\\n> \\u201cIt would be good to see some additional analysis to understand why a high perplexity set works better for one domain while a medium perplexity set works better for others.\\u201d\\n\\nDeveloping a better understanding for why different domain compositions admit different optimal pruning strategies is an interesting direction for future work. One possible explanation can be seen from the fact that the post-pruning domain compositions become more similar (Figure 4). Namely, for both the Pile and Dolma the web crawled domains (cc, openwebtext2, etc.) 
are upweighted while specialized domains (pubmed-central, the stack) are downweighted, suggesting that in both cases it is optimal to have a larger portion of web data. In order to prune to more web data on the Pile, high perplexity samples must be selected, as there is a large proportion of specialized domains which typically have samples with lower perplexity. For Dolma, on the other hand, which is primarily composed of web data, the medium perplexity samples are retained. We will make this discussion more explicit in the final version of the paper.\"}",
"{\"comment\": \"I appreciate the clarifications and thank the authors for taking the time to write the response.\"}",
"{\"title\": \"Response to Reviewer 13P9 (part 2/2)\", \"comment\": \"> \\u201cL290: \\\"These results show that while the higher quality data resulting from perplexity-based data pruning does still lead to an improvement in downstream performance in the over-trained regime, there is not a relative increase in downstream improvement over the baseline when over-training.\\\" It would be good to understand why this is the case since there are no repeats.\\u201d\\n\\nWhile we agree that such an analysis would be interesting, we believe that it is out of scope for our current work. We would also like to emphasize that the comment in the paper is about the relative delta between pruned performance and baseline performance not improving. Importantly, the absolute gain over the baseline is still largely preserved even as we increase the data budget. That the absolute gain from pruning is fixed regardless of training duration is also consistent with the downstream evaluations conducted at different checkpoints during training (Figure 1). With those results, we find that there is a mostly constant gain from training on pruned data throughout the whole duration of training.\\n\\n> \\u201cL314: \\\"That training on repeated perplexity-pruned data leads to diminishing gains after four repetitions post-pruning suggests that the higher quality data resulting from pruning does not change the point for which repeating data yields diminishing improvements in performance.\\\" This sentence is confusing and should be reworded.\\u201d\\n\\nThank you for bringing this to our attention. In the final version of the paper we will change the wording to be:\\n\\n\\n\\u201cAlthough one might hope that training on higher quality pruned data would allow for more repetitions through the data without saturation, we empirically find that this is not the case.\\u201d\\n\\n> \\u201cIn section 4.2, the paper presents results showing that the pruning affects data composition such that some domains (e.g. web) are oversampled compared to others (e.g. PubMed). It would be useful to perform additional analysis to understand why this is the case e.g. is it possible that the training split (L113) resulted in a smaller proportion of these domains for the reference dataset?\\u201d\\n\\nWe investigated whether the domain compositions of the random splits were skewed and we found that the proportion of all random splits was within +- 1% of the original proportions. As the upweighting of web domains is observed across both the Pile and Dolma for two different pruning strategies, we are inclined to believe that the increased performance of web-domains is a more general phenomenon. While we agree that developing a deeper understanding of why web domain data leads to better LLM performance, we believe it is outside of the scope of our research.\\n\\n\\n## References\\n[1] Marion, Max, et al. \\\"When less is more: Investigating data pruning for pretraining llms at scale.\\\" arXiv preprint arXiv:2309.04564 (2023).\"}",
"{\"title\": \"Response to Reviewer wYdV\", \"comment\": \"We would like to thank Reviewer wYdV for their review and the questions that they asked. We also appreciate that the reviewer finds the method to be \\u201cwell motivated\\u201d. Our responses to the reviewer\\u2019s feedback are listed below.\\n\\n> \\u201cIt would be interesting to train for long enough to show convergence in the plots in Fig. 1. The story would be more convincing if there is an offset between the blue and red curves even after convergence. In fact, the \\\"over-training\\\" experiment in Sec. 3.4 shows diminishing gains, so I can imagine that they disappear fully at some point. The method would still have merits (steeper pre-training curve), just not the ones claimed in the paper.\\u201d\\n\\nWhile training to convergence would be an interesting data point, we don\\u2019t believe this is computational feasible with any reasonable budget. Consider the experiments in which we trained for 5x the Chinchilla training duration for the 1B models. We have plotted the performance at intermediate checkpoints throughout training as done in Figure 1 for the 1B parameter model trained 5x chinchilla on the Pile, and the resulting plot can be accessed [here](https://postimg.cc/vgfBGsBZ). As can be seen, even training for 5x the Chinchilla training duration does not saturate the performance.\\n\\nWe would like to make the further meta point that training to convergence is not the standard practice when training LLMs and the standard is training a model for the compute optimal duration [1]. Additionally, in the age of scaling when one benchmark saturates we evaluate on harder benchmarks instead, and as such the performance people care about is not in the convergened regime.\\n\\n> \\u201cNovelty. Perplexity-based pruning and countless variations of it are well-studied. The authors set their work apart from prior work in L058, but neither of the arguments (i)-(iii) (evaluation on downstream task, exploration of domain compositions, \\\"non-standard\\\" evaluation regimes) strike me as particularly strong.\\u201d\\n\\n\\nWe are only aware of one paper that examines model-based perplexity pruning for pretraining LLMs before ours [2] and as such we believe that it is a false characterization to say this setting is \\u201cwell-studied\\u201d. With regard to this earlier paper, the differences we outline are very significant. Without evaluating based on downstream evaluations, one would conclude that the technique does not work unless the reference model is significantly larger than the final model. This conclusion would severely limit the applicability of perplexity-based data pruning as it would only be a useful technique for training smaller language models.\\n\\nWe also strongly believe that evaluating multiple domain compositions and non-standard training regimes have significant implications. By evaluating multiple domain compositions, our research is actually applicable for practitioners as they can choose the proper pruning setting based on their dataset composition. As stated, we find that the optimal settings for one domain composition may actually lead to worse performance than no pruning on another composition. Furthermore, by evaluating the over-trained and data-constrained settings, we provide the first guidance on when the technique should be expected to work in non-standard settings.\\n\\n> \\u201cI don't think that Algorithm 1 is really helping clarity. 
1-2 normal equations would be just as expressive and more concise.\\u201d\\n\\nThank you for this feedback. While we do believe that there are some important details conveyed in the algorithm that would be harder to communicate in text, we agree that it is more complex than necessary. We will simplify the algorithm in the final paper.\\n\\n> \\u201cFig.4 is interesting, but I'm not sure how Fig. 3 is relevant in practice - could you clarify?\\u201d\\n\\nThe purpose of Figure 3 is to provide readers with intuition for both the differences in the distribution of text between the Pile and Dolma and to demonstrate what effect pruning has on the distributions. Namely, the Pile is composed of many domains and as such its distribution has multiple modes while Dolma is predominantly a single domain and correspondingly unimodal. Additionally, we make the point that the perplexity distribution of both datasets has a very similar shape post-pruning. This suggests a potential reason why different pruning strategies are superior on different domain compositions, as different pruning strategies are needed to achieve similar perplexity distributions post-pruning.\\n\\n## References\\n[1] Hoffmann, Jordan, et al. \\\"Training compute-optimal large language models.\\\" arXiv preprint arXiv:2203.15556 (2022).\\n\\n[2] \\u200b\\u200bMarion, Max, et al. \\\"When less is more: Investigating data pruning for pretraining llms at scale.\\\" arXiv preprint arXiv:2309.04564 (2023).\"}",
"{\"summary\": \"This paper presents a perplexity-based pruning method for reducing the size of pre-training datasets. The effect of pruning is evaluated through the performance on downstream tasks as well. Two datasets are used for evaluation: Pile and Dogma. The pruning efficacy is determined for over-trained and data-constrained regimes as well.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important problem of pruning the pre-training datasets to enable efficient training of LLMs.\", \"The experiments are thorough and cover different dimensions of perplexity-based pruning.\", \"The paper is well-written and the results are presented clearly.\", \"The findings are significant, as they show that perplexity-based data filtering can not only reduce the size of the pre-training datasets, it also leads to better performance on certain downstream tasks.\"], \"weaknesses\": [\"The paper does not currently cover the computational complexity of the proposed pruning procedure. A few important questions that need to be considered in this regard:\", \"How do the computational requirements for perplexity-based pruning increase with the size of the dataset to be pruned?\", \"How does the cost of computing perplexity (before pruning) amortize over the efficiency improvements achieved while pretraining the model on the pruned datasets?\", \"A discussion for choosing the right perplexity pruning method (low, medium, high) for the dataset should be included for the practitioners. From the experimental results, we can see that high perplexity selection performs better on Pile while medium perplexity selection is better for dolma. Can we extract any patterns from these results and other experiments that can be generalized to other datasets?\", \"For example, prior theory on data pruning for vision tasks shows that the optimal pruning strategy changes depending on the amount of initial data. When data is abundant, the better pruning strategy is to keep harder examples. In contrast, for smaller datasets, keeping the easier examples leads to better performance. [1]\", \"The results show that test set perplexity may not always be a sound metric for evaluating a pruning strategy and that downstream evaluation is necessary. What should be the cheapest way of conducting the downstream evaluation of the correct perplexity pruning method, i.e., the one that can yield reliable results at a minimal cost? For example, could there be a small set of representative downstream tasks or metrics that could serve as efficient proxies for full downstream evaluation?\"], \"references\": \"[1] https://arxiv.org/abs/2206.14486\", \"questions\": [\"A quantized model may lead to better inference efficiency while calculating the perplexity. Was this considered while running the experiments?\", \"High perplexity selection will also inevitably lead to the inclusion of a significant portion of the noisier examples in the overall dataset. How can we determine the proportion of such examples in the final dataset and exclude them reliably?\", \"Minor typo (line 66): perplexity-basd -> perplexity-based\", \"It would be useful to include the following closely related data pruning works in the related work section:\", \"https://arxiv.org/abs/2403.07384\", \"https://arxiv.org/abs/2402.09668\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer qYWL\", \"comment\": \"We would like to thank Reviewer qYWL for their detailed review of the paper and for finding the proposed method \\u201csuper useful\\u201d. Our responses to the reviewer's questions are listed below.\\n\\n> \\u201cAuthors claim that datasets pruning increases the proportion of general domain data from web-scraped domains, and decreases the proportion of specific and technical domains. But it is unclear and counter intuitive why training on general domain data improves performance of models on benchmarks. I think the paper lacks analysis to explain this observation.\\u201d\\n\\nAs the upweighting of web domains is observed across both the Pile and Dolma for two different pruning strategies, we are inclined to believe that the increased performance of web-domains is a more general phenomenon of llm pretraining. While we agree that developing a deeper understanding of why web domain data leads to better LLM performance, we believe it is outside of the scope of our research.\\n\\n\\n\\n> \\u201cHow do you expect the results to scale on models larger than 3B parameters?\\u201d\\n\\nWhile we believe that using a 125M parameter reference model to prune data for the Pile would continue to yield gains past 3B parameters, there is too much uncertainty to make any claims for Dolma. On the Pile, the gap between the average pruned performance and the average baseline performance actually grows from 1.89 to 2.04 for the 1B and 3B parameter final models respectively. However, on Dolma the gap between the average pruned performance and the average baseline performance shrinks from 1.51 to 0.59 for the 1B and 3B parameter final models respectively.\\n\\nWe would like to emphasize that this is not to say that we don\\u2019t believe the method would for models larger than 3B on Dolma. Even if using a 125M parameter reference model stopped showing gains past 3B final parameters, we believe that through scaling the size of the reference model proportionality to the final model size we would continue to see gains on Dolma.\\n\\n> \\u201cHow does models' performance change on domains which are pruned the most?\\u201d\\n\\nThe model sizes and token budgets we evaluate aren\\u2019t sufficiently large to get signal on the domains that are primarily pruned (i.e., code tasks). The closest category from the evaluation gauntlet that we use is the \\u201cSymbolic Problem Solving\\u201d category. As seen in Table 1, while there is no statistically significant difference for the 1B models, the baseline models outperform the models trained on pruned data for the symbolic problem-solving category at the 3B model scale. We hope to fully investigate how pruning affects coding performance at scale in future work. If performance does degrade on coding tasks, we believe that there would be promising strategies to mitigate this such as domain upsampling at the end of training [1] or applying pruning on a per-domain basis.\\n\\n\\n\\n## References\\n[1] Blakeney, Cody, et al. \\\"Does your data spark joy? Performance gains from domain upsampling at the end of training.\\\" arXiv preprint arXiv:2406.03476 (2024).\"}",
"{\"title\": \"Response to Reviewer wjeV (part 1/2)\", \"comment\": \"We would like to thank Reviewer wjeV for the time that they spent reviewing our paper. Our responses to the reviewer\\u2019s feedback are listed below.\\n\\n> \\u201cThe paper does not currently cover the computational complexity of the proposed pruning procedure. A few important questions that need to be considered in this regard:\\n> * How do the computational requirements for perplexity-based pruning increase with the size of the dataset to be pruned?\\n> * How does the cost of computing perplexity (before pruning) amortize over the efficiency improvements achieved while pretraining the model on the pruned datasets?\\u201d\\n\\nWe will add a discussion of the compute requirements to the paper as we agree it is important. Assuming that the reference model is fixed (i.e., the same size reference model trained on the same number of tokens) as we do, then the only compute required is for performing inference from the reference model over the dataset to be pruned. So the pruning compute grows as $O(N_{ref} \\\\frac{D}{r_s})$ where $N_{ref}$ is the number of reference parameters, $r_s$ is the selection rate, and $D$ is the number of training tokens. As $N_{ref} << N_{final}$, this operation is relatively cheap compared to the overall cost.\\n\\nWe now turn towards the total computational complexity of perplexity pruning. As our method does not change the computation performed on the forward or backward pass, and only affects which tokens are trained on, we can analyze the compute requirements in terms of the total number of operations. We will approximate the cost of training as $6ND$ where $N$ is the number of parameters and $D$ is the number of training tokens, and we will approximate the cost of computing a sequence\\u2019s perplexity in inference mode as $2ND$ [1][2]. Assuming a reference model of size $N_{ref}$, number of reference training tokens $D_{ref}$, final model size $N_{final}$, number of final tokens $D_{final}$, selection rate $r_s$, and fraction of tokens for the pruned data to achieve the same performance as the baseline $F$, the relative cost of perplexity pruning compared to the baseline is:\\n\\n$$\\n\\\\frac{2N_{ref} \\\\frac{D_{final}}{r_s} + 6N_{ref} D_{ref} + 6N_{final} FD_{final}}{6N_{final}D_{final}} = \\\\frac{N_{ref}}{N_{final}}(\\\\frac{1}{3r_s} + \\\\frac{D_{ref}}{D_{final}}) + F\\n$$\\n\\nAll our reference models are 125M parameters, trained for 26B tokens, and we use $r_s = 0.5$ throughout. The 1B models are trained on 26B tokens giving us $\\\\frac{N_{ref}}{N_{final}}(\\\\frac{1}{3r_s} + \\\\frac{D_{ref}}{D_final}) = \\\\frac{125 \\\\times 10^6}{1.3 \\\\times 10^9}(\\\\frac{2}{3} + \\\\frac{26 \\\\times 10^9}{26 \\\\times 10^9}) = 0.16$.\\nFor the Pile, $F=0.76$ giving us a relative cost of $0.16 + 0.76 = 0.92$ and for Dolma $F=0.78$ giving us a relative cost of $0.16 + 0.78 = 0.94$.\\nThe 3B models are trained on 54B tokens giving us $\\\\frac{N_{ref}}{N_{final}}(\\\\frac{1}{3r_s} + \\\\frac{D_{ref}}{D_final}) = \\\\frac{125 \\\\times 10^6}{2.7 \\\\times 10^9}(\\\\frac{2}{3} + \\\\frac{26 \\\\times 10^9}{54 \\\\times 10^9}) = 0.05$.\\nFor the Pile $F=0.69$ giving us a relative cost of $0.69 + 0.05 = 0.74$ and for Dolma $F=0.88$ giving us a relative cost of $0.88 + 0.05 = 0.93$.\\nAs can be seen, perplexity pruning leads to a total reduction in cost all experiments. 
We would also like to emphasize that the cost of reference model training is typically amortized, as it's only performed once per dataset. That is, we used the same reference models to prune the data for both the 1B and 3B models.\\n\\n> \\u201cA discussion for choosing the right perplexity pruning method (low, medium, high) for the dataset should be included for the practitioners. From the experimental results, we can see that high perplexity selection performs better on Pile while medium perplexity selection is better for Dolma. Can we extract any patterns from these results and other experiments that can be generalized to other datasets?\\u201d\\n\\nWe agree that such a discussion would be useful and will add it to the final paper. Our results are still useful for practitioners without needing to extrapolate any trends, as the Pile and Dolma cover the two primary types of domain compositions for pretraining datasets. Namely, datasets are either composed of many specialized, skilled domains or predominantly general web scrapes. As we test both these settings, we believe practitioners can use the same settings we find depending on which of the two types of dataset compositions they have.\"}",
"{\"metareview\": \"This paper presents a study on using smaller language models to prune large datasets effectively, aiming to improve the training efficiency of larger models. The proposed perplexity-based pruning technique is evaluated on two distinct datasets, The Pile and Dolma, which vary in domain structure. The study encompasses different experimental settings, namely over-training and data-constrained setups. The paper concludes that smaller models can successfully select high-value data subsets, which enhance training efficiency and downstream accuracy for larger models.\", \"this_paper_tackles_a_significant_challenge_in_natural_language_processing\": \"the efficient training of LLMs through dataset pruning. By leveraging smaller models for perplexity-based data pruning, the proposed method is a cost-effective alternative to using large models. The extensive empirical results across varied datasets and experimental settings provide persuasive evidence of the effectiveness and practical applicability of this approach. The paper's insights into data composition effects and pruning strategies enrich the understanding of dataset management in model pre-training, making it a valuable contribution to the field.\", \"suggestions\": \"1. A deeper analysis is recommended to clarify why training on general domain data enhances model performance on benchmarks, despite domain-specific prunings which seem counterintuitive. Although the authors note this is outside their current scope, addressing this point would enhance the paper's comprehensiveness.\\n2. The paper should include a discussion of the computational requirements regarding perplexity pruning complexity. While responses address this, embedding it in the paper would guide practitioners better.\\n3. A clearer presentation of the random baseline setup is vital, as well as an explicit clarification in the experimental setup, to make these comparisons transparent. Some further ablation studies or sensitivity analysis related to the reference model size and pruned datasets would benefit the study, albeit acknowledged as resource-intensive.\\n4. Simplifying some parts of the paper, such as Algorithm 1, will assist in enhancing the accessibility of the methodologies delineated.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers initiated several insightful discussions, specifically about the novelty of using perplexity-based pruning for LLMs, differences in pruned dataset compositions, and strategies based on dataset domains. Reviewers raised questions about computational complexities, methodological clarifications, and downstream continuum for over-trained scenarios. Through author's responses, many areas were clarified, which significantly improved reviewer understandings. The discourse notably resulted in an improved appreciation and alignment of reviewer scores on the practical novelties presented by the study, specifically after rebuttal clarifications around benchmarking without convergence due to resource constraints.\"}",
"{\"summary\": \"This paper investigates whether a small model can be used to perform perplexity based data selection for a larger model. The key findings are that 1) a reference model with 30x fewer parameters compared to the larger model can be used to identify a subset of the training data which can improve the performance of the larger model relative to no pruning. 2) the filtered data subset can speed up training of the larger model, 2) the improvements carry over to some extent to over training and data constrained regimes, 3) ideal pruning criteria can vary by dataset e.g. for Pile, a high perplexity subset performs better while for Dolma, a medium perplexity subset works the best. The paper shows that test data perplexity is not a good indicator of the downstream task performance when using perplexity based pruning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Describes a simple approach to improve the performance of large language models using perplexity based data filtration using a smaller reference model.\", \"Presents useful results e.g. 1) filtration criteria varies by dataset type and 2) test set perplexity is not a good indicator of the downstream task performance.\"], \"weaknesses\": [\"The main results (Table 1) do not include a random baseline i.e. what is the performance of a model trained on a subset of the data which has a similar size as the perplexity filtered buckets but is selected randomly?\", \"The paper does not contain ablations on the size of the reference model and sensitivity of the results to the random split (L113) used for training the reference model. Though exploring this space is computationally expensive, it may be useful to present 1-2 additional data points.\", \"It would be good to see some additional analysis to understand why a high perplexity set works better for one domain while a medium perplexity set works better for others.\"], \"note\": \"The authors have addressed some of these concerns (random baseline/sensitivity to random split) in the rebuttal.\", \"questions\": [\"L290: \\\"These results show that while the higher quality data resulting from perplexity-based data pruning does still lead to an improvement in downstream performance in the over-trained regime, there is not a relative increase in downstream improvement over the baseline when over-training.\\\" It would be good to understand why this is the case since there are no repeats.\", \"L314: \\\"That training on repeated perplexity-pruned data leads to diminishing gains after four repetitions post- pruning suggests that the higher quality data resulting from pruning does not change the point for which repeating data yields diminishing improvements in performance.\\\" This sentence is confusing and should be reworded.\", \"In section 4.2, the paper presents results showing that the pruning affects data composition such that some domains (e.g. web) are oversampled compared to others (e.g. pubmed). It would be useful to perform additional analysis to understand why this is the case e.g. is it possible that the training split (L113) resulted in a smaller proportion of these domains for the reference dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We greatly appreciate the reviewer for engaging with the rebuttal and reading our responses. We are pleased to know that the discussion regarding computational complexity addressed the reviewer's concerns. We would just like to follow up and see if there are any other questions we can answer or information we can provide that would convince you to increase our scores/advocate for acceptance of the paper.\"}"
]
} |
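
Note (illustrative annotation, not part of the original record): the reviews above evaluate perplexity-based data pruning with a small reference model — score each training document with the small model, keep one perplexity bucket, and train the larger model on the kept subset. The sketch below is a generic reconstruction of that recipe only, not the paper's implementation; the reference model name (`gpt2`), `keep_fraction`, and the low/medium/high bucket choice are placeholder assumptions.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_perplexity(model, tokenizer, text, device="cpu"):
    """Perplexity of one document under the small reference model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True).to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def prune_by_perplexity(corpus, model, tokenizer, keep_fraction=0.5, selection="high"):
    """Keep one perplexity bucket of the corpus: 'low', 'medium', or 'high'."""
    scored = sorted(
        ((sequence_perplexity(model, tokenizer, doc), doc) for doc in corpus),
        key=lambda pair: pair[0],
    )
    n_keep = int(len(scored) * keep_fraction)
    if selection == "low":
        kept = scored[:n_keep]
    elif selection == "high":
        kept = scored[-n_keep:]
    else:  # 'medium': central slice of the perplexity ranking
        start = (len(scored) - n_keep) // 2
        kept = scored[start:start + n_keep]
    return [doc for _, doc in kept]

# Hypothetical usage: score with the small reference model, then train the large model on `subset`.
# tok = AutoTokenizer.from_pretrained("gpt2")
# ref = AutoModelForCausalLM.from_pretrained("gpt2")
# subset = prune_by_perplexity(raw_corpus, ref, tok, keep_fraction=0.5, selection="high")
```
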
1GPN2oa7P7 | ClipGrader: Leveraging Vision-Language Models for Robust Label Quality Assessment in Object Detection | [
"hong lu",
"Yali Bian",
"Rahul C. Shah"
] | High-quality annotations are essential for object detection models, but ensuring label accuracy — especially for bounding boxes — remains both challenging and costly. This paper introduces ClipGrader, a novel approach that leverages vision-language models to automatically assess the accuracy of bounding box annotations. By adapting CLIP (Contrastive Language-Image Pre-training) to evaluate both class label correctness and spatial precision of bounding box, ClipGrader offers an effective solution for grading object detection labels. Tested on modified object detection datasets with artificially disturbed bounding boxes, ClipGrader achieves 91\% accuracy on COCO with a 1.8% false positive rate. Moreover, it maintains 87% accuracy with a 2.1% false positive rate when trained on just 10% of the COCO data. ClipGrader also scales effectively to larger datasets such as LVIS, achieving 79% accuracy across 1,203 classes. Our experiments demonstrate ClipGrader’s ability to identify errors in existing COCO annotations, highlighting its potential for dataset refinement. When integrated into a semi-supervised object detection (SSOD) model, ClipGrader readily improves the pseudo label quality, helping achieve higher mAP (mean Average Precision) throughout the training process. ClipGrader thus provides a scalable AI-assisted tool for enhancing annotation quality control and verifying annotations in large-scale object detection datasets. | [
"label quality",
"clip",
"object detection"
] | Reject | https://openreview.net/pdf?id=1GPN2oa7P7 | https://openreview.net/forum?id=1GPN2oa7P7 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zf5FtXiEKD",
"wMomlNhH2k",
"NCep1dAOrd",
"MT6cQ9px35",
"Etm3nU11ev",
"8yaaa1kgwM",
"33IyXzxIvm"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review",
"official_review",
"official_review",
"meta_review"
],
"note_created": [
1730402999668,
1730358999155,
1737523968682,
1730666318867,
1730531783463,
1730659052281,
1735022161729
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9219/Reviewer_BMBP"
],
[
"ICLR.cc/2025/Conference/Submission9219/Reviewer_UE2r"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9219/Reviewer_2ZSA"
],
[
"ICLR.cc/2025/Conference/Submission9219/Reviewer_u55H"
],
[
"ICLR.cc/2025/Conference/Submission9219/Reviewer_9kcp"
],
[
"ICLR.cc/2025/Conference/Submission9219/Area_Chair_fGtJ"
]
],
"structured_content_str": [
"{\"summary\": \"Proposes CLIPGrader, an approach to fine-tune CLIP to judge the quality (correctness of box position and label) of detection bounding boxes.The approach is shown to achieve high accuracy at label assessment on COCO and LVIS and shown to improve the performance of semi-supervised object detection methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"\\u2013 The paper is well-written and easy to follow\\n\\n\\u2013 The proposed method seems to be quite effective at assessing label quality\", \"weaknesses\": \"\\u2013 The paper makes limited technical contributions. It\\u2019s main contribution \\u2013 empirically showing that CLIP can be fine-tuned to assess label quality \\u2013 is interesting but in my opinion not substantial. The proposed finetuning strategy is a straightforward generalization of the original CLIP objective (and bears similarity to the supervised contrastive loss [A], which the paper should cite).\\n\\n\\u2013 The motivation of the paper is somewhat weak. Why is CLIP well-suited to label assessment, beyond being a popular multimodal model? Why not use a specialized approach like RegionCLIP [B], which has been designed with detection in mind? \\n\\n\\u2013 The problem formulation is also somewhat contrived. Why treat label quality assessment as a classification problem rather than regressing to the correct coordinates (or a delta therefrom)? Regression would alleviate the need to arbitrarily define \\u201cgood\\u201d and \\u201cbad\\u201d bounding boxes, and allow for more fine-grained metrics (like AP). \\n\\n\\u2013 The paper primarily measures accuracy based on test-set label assessment accuracy. A far more helpful measure would be performance of detection methods that factor in the predicted label quality (maybe by loss weighting, or pseudolabel filtering). While the paper does include a single experiment in Sec 4.4 on semi-supervised object detection, I think a comprehensive set of additional experiments is required to verify that the proposed task and model is actually useful for a real-world task.\\n\\n\\u2013 The paper studies a synthetic label assessment setup, as the \\u201cbad\\u201d bounding boxes are generated by randomly perturbing bounding boxes. While this is reasonable as a starting point, the paper would be strengthened with experiments on \\u201cin the wild\\u201d datasets (eg. by having humans annotate ground truth boxes in an existing evaluation set as \\u201cgood\\u201d and \\u201cbad\\u201d). This is particularly important since prior work has shown that labeling errors in detection are not always random and in-fact can be class and annotation protocol dependent [C].\\n\\n\\u2013 The dataset contains examples of good and bad bounding boxes for each class as well as background boxes, but does not include examples of good bounding boxes but for the wrong class? How does the translate to the model\\u2019s performance?\\n\\n[A] Khosla, Prannay, et al. \\\"Supervised contrastive learning.\\\" NeurIPS 2020\\n\\n[B] Zhong, Yiwu, et al. \\\"Regionclip: Region-based language-image pretraining.\\\" CVPR 2022\\n\\n[C] Liao, Yuan-Hong, et al. \\\"Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets.\\\" ICLR 2024\", \"questions\": \"See weaknesses above for a detailed list of questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"High-quality annotations are essential for object detection tasks. This paper propose to leverage vision-language models (e.g. CLIP) to automatically assess the accuracy of bounding box annotations. The author tried a lot of ways, including prompt engineering, changes in model size, and different model fine-tuning strategies. The final results demonstrate that the proposed approach can identify errors in existing COCO annotations, highlighting its potential for dataset refinement.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper leverages vision-language models for label quality assessment is valuable.\\n2. The experimental results show the potential ability for dataset refinement.\", \"weaknesses\": \"1. Employing pre-trained models to relabel or denoise dataset is not novel. A large amount of literature has demonstrated the ability of multimodal models.\\n2. The contribution of this paper is limited. \\n3. The experimental results are not novel. As the CLIP model has been trained on a large number of text-image datasets, including the COCO dataset used in this experiment. I think this paper is more of a good attempt in engineering, leaning towards a technical report.\", \"questions\": \"See the Weaknesses.\\n\\n#\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The paper proposes to re-purpose CLIP to evaluate object detection label qualities. CLIP is firstly introduced to align image-level semantics and image captions. On the contrary, this papers leverage visual prompt to promote awareness in certain image regions. The experimental results show that the CLIP-grader achieve non-trivial performances even with 1% COCO data.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The idea to use visual prompts in CLIP to evaluate detection labels is novel\", \"The performances with large enough training data sounds strong (low false positive rates)\", \"When using CLIP-grader to improve pseudo-labels, the performances improvements persists. It is non-trivial to translate the performances improvements from labels to model performances in a data-centric manner.\"], \"weaknesses\": [\"Lack of baselines of CLIP-grader when evaluated recall and false positive rates (Table 1)\", \"Lack of deeper analysis of the tail classes and small bounding boxes, which are considered much more important in object detection.\", \"Limited zero-shot performances in Sec. 4.2\"], \"questions\": [\"The process of synthesizing unrealistic bounding boxes needs more clarification. What defines a realistic distribution of incorrect boxes? Even preliminary insights would be valuable.\", \"While CLIPGraders shows promise in evaluating pseudo-labels, can it also improve noisy human annotations? A small-scale study exploring this application would strengthen the paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces ClipGrader, a framework for automatically assessing bounding box quality in object detection datasets by adapting CLIP, a vision-language model, to evaluate label accuracy and spatial alignment of bounding boxes. Evaluation on COCO and LVIS datasets demonstrates the effectiveness of ClipGrader.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-organized and easy to follow, different components are illustrated in detail, making the framework comprehensible and logically structured.\\n2. Adapting CLIP to assess bounding box quality is innovative and addresses a real challenge in maintaining large-scale object detection datasets. This repurposing of CLIP as a \\u201cgrader\\u201d rather than a classifier or detector is novel and promising.\", \"weaknesses\": \"1. While the ablation studies (Section 4.3) are detailed, adding a quantitative comparison with simpler baseline methods, such as non-CLIP-based label grading techniques or direct confidence-based bounding box assessments, would strengthen the claim of ClipGrader\\u2019s superiority.\\n2. As mentioned in the introduction (Section 1), widely used datasets such as COCO are subject to label errors and ClipGrader can be used to assess object detection annotations, it would be better to use ClipGrader to filter incorrect annotations in COCO training set and show some performance gains on the test set to validate the usefulness of the proposed method.\\n3. It\\u2019s mentioned in Section 3.3 that \\u201cwe found that model size significantly impacts performance, with the largest CLIP model yielding significantly better results\\u201d, it would be better to have some quantitative comparison between different sizes of CLIP models.\\n4. The majority of evaluations are conducted on datasets with well-defined and distinct classes (COCO, LVIS). Testing ClipGrader on more complex datasets, such as OpenImages, where bounding box quality varies more, would be better to show its generalizability.\", \"questions\": \"The idea to utilize CLIP to improve some downstream tasks like object detection is interesting, and this paper is overall good. It would be better to add some baseline results for more comprehensive comparison and validate the proposed method on more complex datasets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces ClipGrader, a novel framework that leverages vision-language models to automatically assess the accuracy of bounding box annotations in object detection datasets. It employs CLIP (Contrastive Language-Image Pre-training) to evaluate both class label correctness and spatial precision of bounding boxes, offering a scalable solution for enhancing annotation quality control. ClipGrader demonstrates high accuracy and robustness, achieving 91% accuracy on COCO with a 1.8% false positive rate, and effectively maintaining performance even when trained on a subset of the data. ClipGrader's can help downstream object detection tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"In general, the proposed method is somehow simple, also the author claimed that the proposed method can be helpful in downstream tasks, which can benefit the process of downstream object detection learning.\", \"weaknesses\": \"However, I think the proposed method has severe drawbacks as follows:\\n\\n1. The proposed method is simple and lack novelty. The author only applies a simple classification strategy and a simple contrastive learning pipeline. It is just a simple application of a pre-trained CLIP model. Neither the author proposes a new paradigm (contrastive learning with multiple positive pairs is a common practice in previous papers, especially object detection paper), nor the author has proposed new arch/algorithm to train a grader.\\n\\n2. For object detection datasets, the author only shows performances on COCO and LVIS datasets. COCO and LVIS share the same data sources. Then the performance, even the few-shot performance doesn't convince me here. The author could show results on downstream object detection benchmarks rather than the COCO source, for example, OpenImages and some small datasets to verify the effectiveness of the proposed method. i.e., on VOC dataset and some auto-matic driving datasets like video surveillance datasets. \\n\\n3. For the downstream SSOD teacher, though the CLIPGrader is also trained on 10% of the COCO data, CLIP is trained on multiple data sources, which can not prevent the leakage of the data. The author could find better ways to verify the effectiveness of CLIPGrader, i.e., find the noisy annotations in COCO (with ratio and visualization).\", \"questions\": \"Based on the weaknesses, I have the following questions?\\n\\nCould the proposed CLIPGrader achieves performance gain on other detection datasets rather than COCO/LVIS?\\n\\nWill the performance gain mostly from CLIP, rather than the proposed strategy? If so, applying a modern light-weight VLM will be better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper presents ClipGrader, a method leveraging vision-language models, specifically CLIP, to assess the quality of labels in object detection datasets. It evaluates the correctness of class labels and the spatial precision of bounding boxes, demonstrating its potential to refine datasets like COCO and LVIS. Additionally, ClipGrader's integration into semi-supervised object detection pipelines suggests practical benefits in improving pseudo-label quality.\\n\\nReviewers acknowledged the idea of adapting CLIP for label quality assessment as interesting and noted its scalability and practical implications for dataset refinement. However, concerns were raised about the lack of technical novelty, as the approach mainly repurposes CLIP without substantial innovation. The experiments were limited to datasets that share overlapping sources, restricting the method's generalizability. Reviewers also highlighted the absence of comparisons with alternative baseline methods and real-world \\\"in-the-wild\\\" error scenarios, questioning the robustness and broader applicability of the method. The authors did not submit a rebuttal during the response phase.\\n\\nConsidering these factors, the AC recommends rejection. While the method demonstrates promise in leveraging pre-trained models for dataset refinement, the lack of response and unaddressed concerns about technical contributions and experimental validation limit its impact and readiness for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the meta review\"}"
]
} |
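
Note (illustrative annotation, not part of the original record): the ClipGrader record above describes adapting CLIP to judge whether a bounding-box annotation has the correct class and a well-fitting box. The snippet below only illustrates the zero-shot scoring idea — crop the box and compare CLIP similarities against class prompts plus a background prompt — and is not the paper's fine-tuned, visual-prompt-based grader; the backbone, prompt wording, and threshold are assumptions, and cropping alone does not capture the spatial-precision grading the paper targets.

```python
import torch
from PIL import Image
import open_clip

# Generic CLIP backbone as a stand-in (placeholder choice, not the model used in the paper).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def grade_box(image: Image.Image, box, label, class_names, threshold=0.5):
    """Score how well a (box, label) annotation matches the image content.

    box is (x0, y0, x1, y1) in pixels; class_names is the dataset vocabulary.
    Returns (is_good, probability assigned to the annotated label).
    """
    crop = preprocess(image.crop(box)).unsqueeze(0)
    prompts = [f"a photo of a {name}" for name in class_names] + ["background clutter"]
    text = tokenizer(prompts)
    with torch.no_grad():
        img_feat = model.encode_image(crop)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1).squeeze(0)
    p_label = probs[class_names.index(label)].item()
    return p_label >= threshold, p_label
```
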
1GIVx7COef | Event-aided Dense and Continuous Point Tracking | [
"Zhexiong Wan",
"Jianqin Luo",
"Yuchao Dai",
"Gim Hee Lee"
] | Recent point tracking methods have made great strides in recovering the trajectories of any point (especially key points) in long video sequences associated with large motions.
However, the spatial and temporal granularity of point trajectories remains constrained by limited motion estimation accuracy and video frame rate.
Leveraging the high temporal resolution motion sensitivity of event cameras, we introduce event data for the first time to recover spatially dense and temporally continuous trajectories of any point at any time.
Specifically, we define the dense and continuous point trajectory representation as estimating multiple control points of curves for each pixel and model the movement of sparse events triggered along continuous point trajectories.
Building on this, we propose a novel multi-frame iterative streaming framework that first estimates local inter-frame motion representations from two consecutive frames and inter-frame events, then aggregates them into a global long-term motion representation to utilize input video and event data with an arbitrary number of frames.
Extensive experiments on simulated and real-world data demonstrate the significant improvement of our framework over state-of-the-art methods and the crucial role of introducing events for modeling continuous point trajectories. | [
"event camera",
"dense point tracking",
"continuous motion",
"motion representation"
] | https://openreview.net/pdf?id=1GIVx7COef | https://openreview.net/forum?id=1GIVx7COef | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rU7qoVsm4c",
"i3jcnr6jYs",
"QecvxOxQA2",
"AH9MEEOlp3"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1730611656640,
1730087845336,
1730724252534,
1731655314205
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1295/Reviewer_5NUL"
],
[
"ICLR.cc/2025/Conference/Submission1295/Reviewer_WR11"
],
[
"ICLR.cc/2025/Conference/Submission1295/Reviewer_zoGa"
],
[
"ICLR.cc/2025/Conference/Submission1295/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces a novel event-aided dense and continuous point tracking framework (EDCPT) that integrates the strengths of both image and event data to achieve high-resolution motion tracking in video sequences. The method proposes a multi-frame aggregation strategy for dense point tracking, leveraging event cameras to address temporal limitations in conventional video data. Through this approach, EDCPT can capture temporally continuous point trajectories, which is validated by experiments showing significant performance gains over existing state-of-the-art methods in dense tracking tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The method addresses a key limitation in point tracking by integrating event cameras, effectively combining their high temporal sensitivity with the spatial information of traditional video.\", \"The proposed multi-frame iterative streaming process for motion aggregation is well-designed and enables the model to adapt to variable video lengths.\", \"The paper provides comprehensive experiments on simulated and real-world datasets. The results convincingly demonstrate the advantages of using event data for fine-grained and continuous tracking, with significant improvements over baseline methods.\"], \"weaknesses\": [\"The reliance on event data limits the framework's flexibility, as it may not perform optimally without event camera input. This restricts its applicability to setups where event cameras are available.\", \"Some technical assumptions are not fully supported by the results. For instance, while the multi-frame aggregation is shown to improve performance, there is limited analysis of its specific contribution compared to simpler aggregation techniques.\", \"The framework\\u2019s computational cost, especially given the use of multi-frame input and dense tracking, could make it challenging for use in real-time applications, which is not fully addressed in the paper.\", \"The EDCPT framework is computationally demanding, limiting its real-time applicability in scenarios where immediate results are required. Additionally, its reliance on event cameras restricts its use to specific hardware configurations, reducing its flexibility. While the proposed method is validated on benchmark datasets, further testing in a broader range of real-world applications would strengthen the claims of generalizability. Finally, the integration of event data introduces complexity in the framework, which may pose challenges in deployment and necessitate robust calibration and setup procedures for optimal performance.\"], \"questions\": [\"How does the framework handle sparsity or noise in the event data? Since real-world event cameras often produce sparse or noisy data, it would be valuable to understand the robustness of the proposed method in these conditions.\", \"Some parts of the article are a bit vague and need more explanation. For example, the paper mentions an occlusion handling strategy but lacks quantitative evidence of its effectiveness. Could the authors provide more information on how occlusions are managed and evaluated?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper performs dense point tracking to estimate large motion in long video sequences using events. To solve this task effectively, a multi-frame iterative framework is proposed, which estimates inter-frame motion and uses an aggregation method to estimate global motion. This approach was evaluated on multiple datasets, highlighting its strengths and opening a new field in motion estimation for event cameras.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The writing in this paper is very straightforward, making it easy to read even for those new to events or point tracking. The structure is well-composed, with sufficient references, which is commendable.\", \"Another strength is in the evaluation on multiple datasets\\u2014not only on synthetic ones but also on real-world event datasets. This aligns with the evaluation protocols of many prior studies, which adds credibility.\", \"Various supplementary materials and extensive experimental data also enhance the paper's quality.\"], \"weaknesses\": [\"The novelty of the proposed method is difficult to discern. It appears to be a straightforward adaptation of prior point tracking methods with event stacking. For instance, compared to methods like FlowTrack or DOT, it\\u2019s challenging to see any distinctive or novel aspects. Specifically, the approach of estimating local motion and then accumulating it is an old technique, commonly used in optical flow and dense tracking.\", \"Another drawback is the lack of an inference time comparison, which is a common benchmark in prior protocols (e.g., in the FlowTrack paper). While comparing all methods on both synthetic and real datasets may be impractical, comparison with some representative studies is essential.\", \"Since there\\u2019s no dedicated event-based dense tracking dataset, the authors rely on synthetic datasets and evaluate event-based optical flow in real-world settings. However, this does not truly reflect event-based dense tracking, which is a significant weakness.\"], \"questions\": [\"The author needs to emphasize the distinct aspects of the framework compared to existing methods beyond merely adding events. Currently, these differences are hard to identify.\", \"Demonstrating the framework\\u2019s effectiveness through computational cost analysis would support that it\\u2019s more than just a parameter-heavy approach and instead an efficient method.\", \"While real-world experiments would be ideal, it\\u2019s understandable that this may be infeasible within the given timeframe.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a model for dense point tracking from a multimodal input consisting of RGB frames captured by a standard shutter camera and events coming from an event camera.\\n\\nIn order to represent the motion, the method proposes to parametrize point trajectories with B-Splines, predicting therefore the control points {P_i}_i=1...Nc to recover the curves {T_t}.\\n\\nTheir proposed \\\"local motion etimation\\\" model operates on pairs of adjacent frames I_t and I_{t+1} as well as the events happening between these adjacent frames E_{t->t+1}; and predicts local trajectories {T_{t->t+1}} between these adjacent frames. Then, these trajectories are sequentially combined to obtain {T_{1->t}}, a process which is sometimes aided by the current global motion representation M^{global} in case of occlusions (eq. 1). This global motion representation M^{global} is iteratively updated using the local motion representations M^{local} extracted by the \\\"local motion estimation\\\" module.\\n\\nThe model is trained on the synthetic MOVI-F dataset with 10k 7-frame videos during 500k steps, and evaluated on CVO-test and TAPVid-DAVIS. For the evaluation datasets, events are simulated using vid2e.\\n\\nQuantitative results show the proposed model can obtain SOTA performance on TAPVid-DAVIS and CVO point-tracking benchmarks, as well as in the DSEC optical flow leaderboard.\\n\\nThe authors also present ablation experiments for their global motion aggregation, curve representation and input data.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"While combining event data with RGB data had been used in previous work in optical flow, this is the first work using event data for long-range point tracking.\", \"The authors conduct experimental evaluations on standard point-tracking benchmarks and report SOTA results.\"], \"weaknesses\": [\"The method is not fully understandable nor reproducible with the details given. Figure 1 gives the reader the best guess about how the method works, but it is not really clear how the events are processed by the model, how the local motion representations are obtained, what is the trajectory decoder (L247) and how the global representations are are used for the final trajectory predictions.\", \"It's not clear how the events are obtained for the synthetic MOVI-F training data.\", \"Overall the paper is poorly written and difficult to understand. There are errors that show it was not carefully proofread, there are organization issues, and there are notation issues. For example, sec 3.1 speaks about the global motion representation without having introduced it. The notation in sec. 3.1 is also difficult to follow. For instance, it's not clear what the \\\"initial current global trajectory\\\" T^init_{1->t+1} means and how it is used, as it doesn't appear in any equation. There are also no details about that the Warp and Fusion operations in eq (1) are.\"], \"questions\": [\"Please explain how the events are used for the local correlation construction and how the M^{local} is computed.\", \"Please explain what are the warp and fusion operations in (1).\", \"Please explain how the global motion representations M^{global} are used.\", \"Please explain how events are obtained for the training data.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}"
]
} |
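
Note (illustrative annotation, not part of the original record): the point-tracking record above represents each pixel's dense, continuous trajectory with a small set of curve control points so positions can be queried at any time. The sketch below evaluates a cubic Bézier per pixel from four control points as a stand-in for that representation; the paper's actual curve family, number of control points, and prediction network are not reproduced here, and all names and shapes are hypothetical.

```python
import numpy as np

def evaluate_trajectories(control_points: np.ndarray, t: float) -> np.ndarray:
    """Evaluate per-pixel cubic Bezier trajectories at normalized time t in [0, 1].

    control_points: (H, W, 4, 2) array — four (x, y) control points per pixel.
    Returns an (H, W, 2) array of point positions at time t.
    """
    p0, p1, p2, p3 = (control_points[..., i, :] for i in range(4))
    s = 1.0 - t
    return (s**3) * p0 + 3 * (s**2) * t * p1 + 3 * s * (t**2) * p2 + (t**3) * p3

# Hypothetical dense query: positions of every pixel's track at an arbitrary inter-frame time.
# H, W = 480, 640
# ctrl = np.random.rand(H, W, 4, 2).astype(np.float32)  # stand-in for predicted control points
# positions = evaluate_trajectories(ctrl, t=0.37)        # shape (480, 640, 2)
```
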
|
1FiMrJxPAM | A Super-Aligned Driving Generalist Is Your Cockpit | [
"Hao LU",
"Jiaqi Tang",
"Jiyao Wang",
"Yunfan LU",
"Qingyong Hu",
"Yin Wang",
"Tianxin Xie",
"Yuting Zhang",
"Yunpeng Zhang",
"Bin Huang",
"Dengbo He",
"Shuiguang Deng",
"Hao Chen",
"Ying-Cong Chen"
] | The intelligent driving cockpit, an important part of intelligent driving, needs to match different users' comfort, interaction, and safety needs. This paper aims to build a \textbf{s}uper-\textbf{a}ligned and \textbf{ge}neralist \textbf{dr}iving agent, \textbf{sage deer}. Sage Deer achieves two highlights: (1) Super alignment: It achieves different reactions according to different people's preferences and biases. (2) Generalist: It can understand the user's physiological indicators, facial emotions, hand movements, body movements, driving scenarios, and behavioral decisions. (3) Multimodal: He can understand RGB, NIR, and depth video to build more robust perception, understanding, and reasoning. To achieve the above requirements, we design retrieval-enhanced multimodal frameworks. We collected multiple data sets and built a large-scale benchmark. This benchmark measures the sage deer's perceptual decision-making ability and the super alignment's accuracy. | [
"Driving Cockpit; Super alined; Driving Generalist"
] | https://openreview.net/pdf?id=1FiMrJxPAM | https://openreview.net/forum?id=1FiMrJxPAM | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xs5dUHKPLo",
"oqRGVYP7kv",
"nuhq8HVZMJ",
"VmOcclJzWn",
"OkUuD3Xmoy",
"9D9Ze1mwZy",
"60xyH0IKwP"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730354729341,
1730275747841,
1730707031613,
1730689235847,
1730340766175,
1731655955952,
1730732451863
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission174/Reviewer_tfDg"
],
[
"ICLR.cc/2025/Conference/Submission174/Reviewer_ZJwV"
],
[
"ICLR.cc/2025/Conference/Submission174/Reviewer_Ara9"
],
[
"ICLR.cc/2025/Conference/Submission174/Reviewer_E6VX"
],
[
"ICLR.cc/2025/Conference/Submission174/Reviewer_ZXDa"
],
[
"ICLR.cc/2025/Conference/Submission174/Authors"
],
[
"ICLR.cc/2025/Conference/Submission174/Reviewer_s9po"
]
],
"structured_content_str": [
"{\"summary\": \"This paper presents a multi-modal LLM designed for human and scene understanding in autonomous driving. It integrates multi-view image and multi-modal inputs, using a retrieval-augmented generation strategy to improve test-time adaptation. For evaluation, a multi-modal, multi-view dataset for driving perception is introduced. The proposed method outperforms standard multi-modal LLM models.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The motivation to combine 3D scene perception with both visual and physiological signals from humans is clear and compelling. However, an ablation study on each signal and modality would enhance understanding of their individual contributions.\\n2. A dataset is established, incorporating multi-modal, multi-view sensor data along with QA pairs for evaluating the LLM\\u2019s scene understanding and reasoning capabilities.\\n3. The proposed method shows improved overall performance on the provided dataset.\", \"weaknesses\": \"Overall, I regret to say that this submission appears of low quality, with numerous errors suggesting it was submitted without thorough proofreading\\u2014a potential disservice to the reviewers.\\n\\n1. There are numerous typographical and formatting errors, including typos, incorrect notations, capitalization issues, and incomplete sentences. Examples include:\\n - L050: \\\"technologies, s possess\\\"\\n - L053: '\\\"with s,\\\"'\\n - L208: \\\"supplement the captain\\\"\\n - L261, L265: \\\"the language spaceemface \\u2208 C \\u00d7 L.\\\", \\\"emrgb \\u2208 C \\u00d7 L, < RGB bos > emrgb < RGB cos >.\\\"\\n - L271, L277: \\\"emfront \\u2208 C \\u00d7 L,\\\"\\n - L288: \\\"framework ash shown in Fig. 3\\\"\\n - L307: \\\"The Relationship Between Physiological State and Emotion: Classical\\\"\\n - L315-L317: \\\"other tasks, including: The Relationship Between Physiological State and Behavior\\u2026\\\" (repeated thrice)\\n\\n2. The proposed method lacks novelty, as it is essentially a multimodal LLM with RAG, without any specific design tailored for the target task. Additionally, key methodological details, such as training strategies, specific model architectures, and hyperparameters, are missing.\\n\\n3. Experimental analysis is limited. In-depth experimentation and analysis are needed to substantiate the claimed benefits of using a multimodal approach.\\n\\n4. The dataset setup is unclear. Since the captions are generated by open-source VLMs, please clarify the measures taken to ensure their quality.\\n\\n5. The related work citations do not consistently support the claims made. For instance, L308 references \\\"Classical studies in psychophysiology (e.g., James-Lange, Schachter-Singer)\\u2026\\u201d without sufficient context.\\n\\n6. The appendix section is empty. Please remove the placeholder text: \\\"You may include other additional sections here.\\\"\\n\\n7. Finally, as the dataset includes human subjects, please provide an ethics statement to address concerns regarding its use.\", \"questions\": \"Please see weaknesses section.\\n\\nAdditionally, real-world dataset construction rarely captures abnormal behaviors. 
How, then, does training on the proposed dataset support effective human behavior anomaly detection?\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"I recommend this paper be flagged for an ethics review due to concerns related to the proposed dataset, which includes human subjects. Key ethical considerations include:\\n\\n1) The dataset's inclusion of human data raises concerns about compliance with copyright laws, data protection standards, and consent protocols under regulations like GDPR.\\n\\n2) The use of human data requires careful consideration of ethical research practices, including whether informed consent was obtained, how the data will be stored, and the responsible handling and potential release of this data.\\n\\nTo ensure an ethically sound review, an ethics reviewer with expertise in privacy, legal compliance, and responsible research practices would be most suitable.\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes to leverage multimodal data and LLMs to understand driver physiology, emotions, and behaviors in real-time. The authors use a RAG framework combined with expert knowledge integration to provide personalized feedback.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1\\u3001The paper introduce an interesting problem, emphasizing the importance of individual preferences in enhancing the driving experience.\\n2\\u3001The RAG framework addresses the need for flexible, personalized responses without extensive model fine-tuning.\\n3\\u3001The proposed method was evaluated on multiple driving datasets for its generalist and super-aligned performance.\", \"weaknesses\": \"1\\u3001The paper shows limited novelty, like RAG, are pre-existing approaches.\\n2\\u3001The \\\"Expert Knowledge Fusion\\\" section isn\\u2019t clearly explained. Adding pseudocode or a flowchart could make it easier to follow.\\n3\\u3001The paper lacks ablation studies to verify the effectiveness of individual modules, such as physiological indicators and expert knowledge fusion.\", \"questions\": \"1\\u3001Can you explain why the paper chose ResNet18 as the pre-trained model instead of a more powerful option?\\n2\\u3001With multimodal data and personalized preferences present, how does RAG decide which information to prioritize for retrieval? Is there a specific prioritization or weighting system?\\n3\\u3001Could Sage Deer be compared more thoroughly with other recent intelligent driving agents, like DriveGPT or DriveLM, to provide a deeper understanding of its performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper aims to build a super-aligned and generalist driving agent, called sage deer for the intelligent driving cockpit. A new dataset is constructed for many tasks, e.g., physiological estimation, emotional estimation, gesture estimation, body motion estimation, driving behavior estimation, driving behavior detection, and driving decision-making. An MLLM is trained for unified tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The paper is the first to construct a unified dataset for MLLMs in intelligent driving cockpit. A multi-task dataset is provided and an MLLM is trained on the dataset.\"], \"weaknesses\": [\"The dataset construction part is extremely lacking in details, including data curation, GPT4 labeling, etc. The paper states in many places that the details are in supplementary materials, but the supplementary materials have not been submitted. Besides, the contribution of \\\"An intelligent driving cockpit super alignment evaluation protocol involving generalization ability for different needs was established\\\" cannot be well-established in the paper.\", \"The qualitative results are very limited. Only in Fig. 4, some conversations are provided, and from this figure, we cannot know the full ability of the model.\", \"The writing of the paper was very hasty. many sentences are not clear and typos are everywhere, e.g., \\\"serves as the interface for human interaction with s\\\" in L053, and \\\"Tokenizing Multi-Model\\\" in L257.\"], \"questions\": \"See weaknesses. Please provide more details as much as possible.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This research presents \\\"Sage Deer,\\\" an innovative super-aligned and generalist driving agent designed to enhance intelligent cockpit systems. The proposed framework addresses the challenges of personalized driving experiences and comprehensive omni-modal information processing through several key innovations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Strength:\", \"Seamlessly integrates data from various sensors (RGB, NIR, depth cameras) and multiple perspectives, enabling comprehensive environmental understanding.\", \"Employs a unique mechanism that combines an updatable knowledge base with a large language model, enabling contextually relevant responses without extensive fine-tuning.\"], \"weaknesses\": [\"Weakness:\", \"The authors should provide more comprehensive information about the model architecture, including specifics such as the choice of LLM with its size and so on.\", \"In Figure 3, a \\\"Pre-trained Video Encoder\\\" is depicted, whereas Section 4.2 mentions the use of an \\\"ImageNet pre-trained ResNet18.\\\" Are these referring to the same component? Additionally, how does this encoder handle other modalities? Lastly, how many tokens does the encoder output? Providing more detailed explanations would enhance understanding.\", \"In Section 4.1, the author introduces specific start and end symbols to denote different modalities. Are these symbols newly added special tokens to the LLM's vocabulary? If so, how are these tokens initialized? Since the LLM remains frozen and is not further trained, how does the pretrained model recognize these new tokens?\", \"In Section 5.2, the maximum sentence length is set to 64. How was this value determined? Since text sentences are processed by a tokenizer, why not base this parameter on the number of tokens instead? Were any experiments conducted to evaluate the impact of this choice on performance or the training and inference computational budget?\", \"The sequence of tables and figures should be adjusted for consistency. For instance, Table\\u202f2 is only mentioned in Section\\u202f5.5, while Tables\\u202f3 and\\u202f4 are referenced earlier in the document before Table\\u202f2.\", \"The manuscript requires improved writing quality, as numerous typographical errors are present. For example, on line\\u202f414, \\\"model\\\" should be corrected to \\\"figure,\\\" and on line\\u202f261, a space is needed between the text and the equation.\", \"The manuscript currently contains several typographical and writing errors, as well as some missing details, which is not ready for submission. I believe it would benefit from further revisions to address these issues and ensure it meets the standards required for submission to ICLR.\"], \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces \\\"Sage Deer,\\\" an intelligent driving cockpit system aimed at meeting personalized user needs through a multi-modal framework and a retrieval-augmented generation mechanism.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The concept of Sage Deer as a super-aligned, generalist agent offers a fresh approach to intelligent cockpit systems, adapting in real-time to individual user preferences.\\n2. The tailored application of a retrieval-augmented generation framework for the driving domain is a notable contribution, enabling efficient and adaptive responses to evolving user needs.\\n3. The development of a large-scale benchmark using a variety of datasets (AIDE, DMD, and others) to assess the system's decision-making and perception capabilities adds rigor and depth to the system\\u2019s evaluation.\", \"weaknesses\": \"1. How are the various inputs (e.g., visual, physiological) integrated to influence real-time driving decisions?\\n2. Could the paper delve deeper into how user interactions are managed, especially in complex scenarios? Are there any limitations to the system\\u2019s ability to interpret nuanced or less common user behaviors?\\n3. There are a few errors: for example, the purpose of \\\"s\\\" in lines 50 and 53 is unclear, \\u201cOut View Caption\\u201d is duplicated in Figure 2, and \\u201cAccurate labele\\u201d contains a spelling error.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you for your effective suggestions. We will further improve the quality of our papers with these suggestions.\"}",
"{\"summary\": \"The paper introduces Sage Deer, a multi-modal, multi-view framework for intelligent driving cockpits, designed to provide personalized, context-aware assistance. It integrates RGB, NIR, and depth cameras, and captures diverse data on driver states such as physiology, emotion, and behaviour which enables comprehensive monitoring and real-time response. This data is processed through a language model, allowing for nuanced comprehension and interaction capabilities.\\n\\nThe system\\u2019s architecture relies on three core components: retrieval-augmented generation (RAG), multi-modal fusion, and expert knowledge incorporation. RAG allows Sage Deer to retrieve relevant external information, tailoring responses to user preferences. Multi-modal fusion combines data from various camera views, enhancing the model's understanding of the environment and driver states. Expert knowledge fusion further refines Sage Deer\\u2019s outputs by integrating specialized insights into physiological and emotional monitoring, optimizing its response relevance and accuracy.\\n\\nExperimental results demonstrate Sage Deer\\u2019s effectiveness in multitasking and adapting to diverse user needs, providing a benchmark for intelligent cockpit design. By aligning AI capabilities with user-centered safety requirements, Sage Deer advances the potential of personalized driver assistance systems, positioning itself as a foundational technology for future ADAS applications.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Sage Deer integrates multi-modal and multi-view data sources, combining RGB, NIR, and depth cameras to achieve a highly adaptive and personalized intelligent cockpit system. The model\\u2019s use of Retrieval-Augmented Generation (RAG) allows it to pull relevant context-specific information from external sources, enhancing the system\\u2019s real-time responsiveness and ability to deliver highly accurate, personalized interactions aligned with individual driver preferences. This capacity for personalization goes beyond standard cockpit systems, as Sage Deer monitors physiological, emotional, and behavioural states to tailor responses to the driver's unique profile, significantly boosting both user engagement and safety.\\n\\nThe fusion of diverse sensor data enables Sage Deer to accurately perceive and interpret complex, dynamic conditions within and outside the vehicle, making it capable of maintaining performance under varying lighting and environmental scenarios. Its robust, real-time capabilities show substantial potential for practical applications in ADAS, offering intelligent, responsive support that adapts continuously to real-world challenges. Sage Deer\\u2019s architecture sets a new standard for intelligent cockpit systems, bringing together advanced AI components to enhance driver experience and overall vehicle safety in ways that align with the evolving demands of autonomous and semi-autonomous vehicles.\", \"weaknesses\": \"The paper needs improvement in writing. There are mistakes and citation errors. See at the end of this message.\\n\\nThe main issue is the novelty. It seems to combine multiple models to improve intelligent driving. This contribution is not good enough for a conference like ICLR.\\n\\nIn (Section 4.3), the authors mentioned that the model relies on visual tokenization of physiological data, such as heart rate and blood oxygen levels, to infer emotional or behavioural states. 
However, this approach assumes direct correlations with emotions, potentially leading to inaccuracies. While the authors have cited studies in psychophysiology suggesting links between physiological signals and emotions, the real-world application requires greater nuance. Authors should discuss impact on signals due to factors like individual baselines, environmental conditions, and physical activity.\\n\\nThe model\\u2019s use of a pre-trained ResNet18 for tokenizing RGB, NIR, and depth inputs may lack the capacity to capture the complex nuances needed for an intelligent cockpit system. To address this, the authors should conduct ablation studies comparing ResNet18 with advanced models like ResNet50, EfficientNet, ViT, and Swin Transformer to assess improvements in accuracy and robustness. Additionally, the current concatenation-based fusion strategy may underutilize the complementary data from multi-modal inputs. Testing different fusion techniques, such as attention-based and cross-attention methods, could identify more effective integration approaches. Further analysis of each modality\\u2019s impact would clarify the significance of RGB, NIR, and depth data, while transformer-based models could improve temporal understanding for tasks like fatigue tracking.\\n\\nThe reliance on a language model for contextual understanding may oversimplify dynamic driving scenarios, missing essential non-verbal cues for real-time safety. Ablation studies could address this by comparing language-only input to multi-modal input (e.g., visual, physiological, behavioural data) to assess non-verbal contributions to accuracy in safety-critical tasks. Testing each modality individually would highlight their impact while comparing the language model with and without RAG would clarify RAG\\u2019s role in context accuracy.\", \"writing\": \"\", \"line_37\": \"He -> it\", \"line_50\": \"s possess?\", \"line_53\": \"with s?\", \"line_61\": \"reference a?\", \"line_66\": \"repeat of Sima et al.\", \"line_188\": \"Beachmarking -> Benchmarking\", \"questions\": \"1. The novelty of the contribution.\\n2. In Section 4.3, the model uses visual tokenization of physiological signals to infer emotions. How does the model account for individual differences in physiological baselines or external factors (e.g., physical activity, environmental conditions) that might affect these signals?\\n3. Why was ResNet18 chosen over more advanced models like ResNet50, EfficientNet, or transformer-based architectures? Did you conduct any initial tests with these models?\\n4. Would you consider performing ablation studies comparing ResNet18 with more powerful feature extractors to evaluate improvements in capturing behavioral and environmental nuances?\\n5. How does ResNet18 perform in capturing temporal dependencies in sequential data, particularly for tasks requiring context awareness over time, such as fatigue tracking?\\n6. Given the current use of a concatenation-based fusion approach, have you explored other fusion techniques, such as attention-based fusion or cross-attention mechanisms, to maximize the complementary data from RGB, NIR, and depth inputs? Have you considered ablation studies to evaluate the impact of each modality independently?\\n7. 
How does the model handle or prioritize input from non-verbal cues compared to language-based cues in dynamic driving contexts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Not applicable\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1Ffzgglq2I | Binary Reward Labeling: Bridging Offline Preference and Reward-Based Reinforcement Learning | [
"Yinglun Xu",
"David Zhu",
"Rohan Gumaste",
"Gagandeep Singh"
] | Offline reinforcement learning has become one of the most practical RL settings. However, most existing works on offline RL focus on the standard setting with scalar reward feedback. It remains unknown how to universally transfer the existing rich understanding of offline RL from the reward-based to the preference-based setting. In this work, we propose a general framework to bridge this gap. Our key insight is transforming preference feedback to scalar rewards via binary reward labeling (BRL), and then any reward-based offline RL algorithms can be applied to the dataset with the reward labels. The information loss during the feedback signal transition is minimized with binary reward labeling in the practical learning scenarios. We theoretically show the connection between several recent PBRL techniques and our framework combined with specific offline RL algorithms. By combining reward labeling with different algorithms, our framework can lead to new and potentially more efficient offline PBRL algorithms. We empirically test our framework on preference datasets based on the standard D4RL benchmark. When combined with a variety of efficient reward-based offline RL algorithms, the learning result achieved under our framework is comparable to training the same algorithm on the dataset with actual rewards in many cases and better than the recent PBRL baselines in most cases. | [
"Preference based reinforcement learning; Offline reinforcement learning"
] | https://openreview.net/pdf?id=1Ffzgglq2I | https://openreview.net/forum?id=1Ffzgglq2I | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vFQY8IFNmc",
"vBgcN1CZcQ",
"WlJtQm0aG0",
"TlYwRHDhqu",
"8os4eMBKTS"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730722003049,
1730689930733,
1729575927492,
1733350345550,
1729397421428
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12272/Reviewer_hhtM"
],
[
"ICLR.cc/2025/Conference/Submission12272/Reviewer_j78D"
],
[
"ICLR.cc/2025/Conference/Submission12272/Reviewer_KHTZ"
],
[
"ICLR.cc/2025/Conference/Submission12272/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12272/Reviewer_Eokm"
]
],
"structured_content_str": [
"{\"summary\": \"The paper presents a novel framework aimed at bridging the gap between offline preference-based reinforcement learning (PBRL) and standard offline reward-based reinforcement learning (RL). The authors propose a method called Binary Reward Labeling (BRL), which transforms preference feedback into scalar rewards, allowing the application of any reward-based offline RL algorithm to datasets with reward labels. The key insight is simply relabel the reward function with $\\\\pm 1$ using preference labels. The paper provides theoretical connections between PBRL techniques and the proposed framework combined with specific offline RL algorithms. Empirical tests on preference datasets based on the D4RL benchmark demonstrate that the framework's performance is comparable to training on datasets with actual rewards and superior to recent PBRL baselines in many cases.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper introduces a simple and unique method for translating preference feedback into a format that can be used by standard offline RL algorithms, which is a significant step forward in the field of PBRL.\", \"The authors provide a theoretical analysis that connects their framework with existing PBRL techniques, providing an interesting point of view and adding depth to the understanding of how preference information can be utilized in RL.\"], \"weaknesses\": \"- The paper suffers from poor writing quality and formatting issues, which detract from the overall presentation and readability. For example, in Definition 4.2, there should be a period after \\\"reward modeling in model-based approaches,\\\" and the comma should not appear at the start of a line. The subtitle \\\"Offline standard RL algorithms are model-based.\\\" in Section 4.2 can be misleading.\\n\\n- The soundness of the proposed method is questionable. While the $\\\\pm 1$ reward labeling is theoretically correct, it is usually not a good choice to overfit the preference dataset. Having a more rigorous analysis under the function approximation scenario would be nice.\\n\\n- The paper needs some benchmarks and baselines to validate the effectiveness of the proposed method. For benchmarks, The D4RL benchmark is known to be insensitive to the accuracy of the reward function [1], and adding benchmarks like Meta-World would greatly strengthen the paper. Also, there are some recent works on offline PbRL that have a strong performance, like [2,3], and BRL should be compared with them.\\n\\n\\nReferences\\n\\n[1] Li, Anqi, et al. \\\"Survival instinct in offline reinforcement learning.\\\" Advances in neural information processing systems 36 (2024).\\n\\n[2] Kim, Changyeon, et al. \\\"Preference transformer: Modeling human preferences using transformers for rl.\\\" arXiv preprint arXiv:2303.00957 (2023).\\n\\n[3] Zhang, Zhilong, et al. \\\"Flow to better: Offline preference-based reinforcement learning via preferred trajectory generation.\\\" The Twelfth International Conference on Learning Representations. 2023.\", \"questions\": \"See the Weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This manuscript introduces a novel framework aimed at addressing the challenge of transferring knowledge from reward-based to preference-based offline reinforcement learning (PBRL). The authors highlight that while offline RL has gained practical significance, most research has been limited to scalar reward feedback, leaving a gap in understanding how to apply offline RL techniques to preference-based settings. The proposed solution involves converting preference feedback into scalar rewards through binary reward labeling (BRL), which allows the application of any reward-based offline RL algorithms to datasets with these labels. This approach minimizes information loss during the transition from preference to scalar rewards. The paper establishes theoretical connections between recent PBRL techniques and the proposed framework when combined with specific offline RL algorithms, suggesting that the framework can yield new and more efficient offline PBRL algorithms. Empirical tests on preference datasets from the D4RL benchmark demonstrate that the framework's performance, when combined with various efficient reward-based offline RL algorithms, is often comparable to training on datasets with actual rewards and superior to recent PBRL baselines in most cases.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work investigate an important problem and conduct the theoretical analysis for the method.\", \"weaknesses\": \"1. The writing of this work is not good. I get confused for many spaces. What is link function? What is link-loss function? The writing of Section 4.1 is very confusing and incomprehensible. The pseudocode is too concise.\\n\\n2. Missing a lot of baseline algorithms. For example, OPRL [1] and PT [2].\\n\\n[1] Shin, Daniel, Anca D. Dragan, and Daniel S. Brown. \\\"Benchmarks and algorithms for offline preference-based reward learning.\\\" arXiv preprint arXiv:2301.01392 (2023).\\n\\n[2] Kim, Changyeon, et al. \\\"Preference transformer: Modeling human preferences using transformers for rl.\\\" arXiv preprint arXiv:2303.00957 (2023).\", \"questions\": \"1. Can you evaluate your algorithms on various domains? For example, Antmaze, Kitichen and Adroit?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper discusses the problem of acquiring a reward function from offline preference datasets. The authors claim that binary reward labelling is sufficient for solving this problem. Results on D4RL demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The problem of reward labeling from preference labels is a fundamental challenge in offline PBRL.\\n2. The performance improvement is impressive.\", \"weaknesses\": \"1. Presentation is poor. The citations are poorly formatted and hard to read.\\n2. Lack of discussion about comparison against the commonly used BT reward model. The contribution is poorly justified.\\n3. The authors claim that \\\"For the baseline methods, to the best of our knowledge, no existing empirical study works in exactly\\nthe standard offline PBRL setting considered in our work\\\". However, there have been massive studies on offline preference-based RL, such as PreferenceTransformer (https://arxiv.org/pdf/2303.00957) and OPRL (https://arxiv.org/pdf/2301.01392) and can be readily adopted into the experiment framework.\\n4. (https://proceedings.neurips.cc/paper_files/paper/2023/file/c3e969ea20542a6a11e6caeac736a0b9-Paper-Conference.pdf) reveals that D4RL tasks are not sensitive to reward labels. So the empirical results may not be convincing.\", \"questions\": \"1. Why does the binary reward outperform BT model? Will the empirical results still hold in more complex tasks such as Meta-World?\\n2. How do baseline methods such as Preference Transformer perform on the benchmarks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We appreciate the comments from the reviewers and will incorporate them in the next version of our work\"}",
"{\"summary\": \"This paper proposes a binary-encoding-based reward model learning method for preference-based reinforcement learning. The method demonstrates superior performance in both overlapping and non-overlapping trajectory scenarios.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1) Theoretical: In the case of non-overlapping trajectories, the relationship between the binary-encoding-based reward model and the traditional reward model is established.\\n\\n2) Experimental: The performance of the algorithm is simulated under both overlapping and non-overlapping trajectory scenarios.\", \"weaknesses\": \"1) Writing: The sections on related work and theoretical foundations are overly redundant. Some statements, particularly in the introduction, are inaccurately expressed. For example, current offline PbRL methods primarily focus on reward model learning, rather than on the policy learning aspect itself. For example, in lines 47-49 and 72-76 of the paper.\\n\\n2) Motivation: The motivation of the paper is unclear. The authors state that the main goal is to develop a framework to bridge the gap between PbRL and standard RL, allowing a standard offline RL algorithm to address the PbRL problem. However, the primary motivation behind PbRL is to resolve the challenge of setting rewards in standard RL. The difficulty in PbRL lies in accurately learning rewards from human preferences, which is not a problem that standard offline RL addresses. The author could approach this from the perspective of overlapping (or similar) trajectories and inconsistent labels, which might lead to a more effective explanation.\\n\\n3) Theory: Theoretical 4.5 only considers the case of non-overlapping trajectories and does not account for the scenario of overlapping trajectories with inconsistent labels.\\n\\n4) Experiments: The dataset is limited, with experiments conducted solely in the mujoco tasks. The paper does not compare results with cutting-edge PbRL methods, such as PT ( Preference transformer: Modeling human preferences using transformers for rl).\", \"questions\": \"1) Please authors further clarify the motivation of this paper. (This is the main question)\\n\\n2) How does the algorithm perform in cases where trajectories overlap and labels are inconsistent? The author could discuss how their theoretical results might extend to or be limited by scenarios with overlapping trajectories.\\n\\n3) What are the advantages of the binary-encoding-based reward model compared to the traditional reward model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1FY1apsMxc | LLM as GNN: Graph Vocabulary Learning for Graph Foundation Model | [
"Xi Zhu",
"Haochen Xue",
"Ziwei Zhao",
"Mingyu Jin",
"Wujiang Xu",
"Jingyuan Huang",
"Qifan Wang",
"Kaixiong Zhou",
"Yongfeng Zhang"
] | Graphs typically exhibit distinctive structure and domain-specific knowledge, motivating the development of a Graph Foundation Model (GFM) capable of generalizing across various graphs and tasks. While recent efforts have focused on combining the strengths of Large Language Models (LLMs) and Graph Neural Networks (GNNs), they often struggle to maximize mutual benefit due to the decoupled architectures. Moreover, existing methods assign out-of-vocabulary (OOV) tokens to nodes, which are incompatible with the natural language vocabulary for task-oriented prompt generation, hindering knowledge transfer in GFM. In this paper, we introduce PromptGFM, a versatile GFM grounded in graph vocabulary learning, comprising two key components: (1) Graph Understanding Module, which explicitly replicates the finest GNN workflow in the language space using LLMs, enabling seamless GNN-LLM integration and elegant graph-text alignment; (2) Graph Inference Module, where we establish a novel language-based graph vocabulary to ensure expressiveness, transferability, and scalability. This vocabulary enables the generation of readable instructions for LLM inference, resolving modality incompatibility and facilitating positive transfer. Extensive experiments demonstrate the superiority of PromptGFM in node classification and link prediction, along with its strong transferability across different datasets and tasks. The code is available at \url{https://anonymous.4open.science/r/PromptGFM}. | [
"large language model",
"foundation model",
"graph neural networks"
] | https://openreview.net/pdf?id=1FY1apsMxc | https://openreview.net/forum?id=1FY1apsMxc | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"qUpj3auLFz",
"oJ8nFSwVeF",
"aXsEChKi14",
"SQTuvUnQAW",
"OSSZatvdut",
"O8o41Ofsos",
"EzgeyB2o9W",
"DlIycSRlb2",
"DSHg1rneDI",
"Aq1iAzdCBe",
"9q702U7MZz",
"6IGjRjiXmz",
"0X80rU46LT"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1732486614274,
1732484082281,
1732483012150,
1730011384247,
1730619483777,
1732539021767,
1732575791453,
1730087665987,
1730023038677,
1732483155018,
1733293080219,
1732509946718,
1733293206781
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12885/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12885/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12885/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_gQ3P"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_p4FX"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_gQ3P"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_p4FX"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_9vAr"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_xm9X"
],
[
"ICLR.cc/2025/Conference/Submission12885/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12885/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12885/Reviewer_9vAr"
],
[
"ICLR.cc/2025/Conference/Submission12885/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer 9vAr\", \"comment\": \"Thank you so much for your review. In the following, we clarify the misunderstanding and address your concerns.\\n\\nW1. (1) Sorry for your misunderstanding. As mentioned in the Introduction Section, our proposed PromptGFM is a graph foundation model for text-attributed graphs, which is the most studied setting in this research field. (2) Regarding dataset domains, most works in this field focus on how different domains incorporate varying semantics rather than the specifics of dataset construction. For instance, a recent comprehensive survey [1] adopts a similar approach, categorizing Cora/Citeseer/Arxiv as the CS citation domain and assigning PubMed to the MISC domain. We will consider including more datasets in future work.\\n\\n[1] Chen, Zhikai, Haitao Mao, Jingzhe Liu, Yu Song, Bingheng Li, Wei Jin, Bahare Fatemi et al. \\\"Text-space Graph Foundation Models: Comprehensive Benchmarks and New Insights.\\\" arXiv preprint arXiv:2406.10727 (2024).\\n\\nW2. Thank you for your suggestion. The table presents node classification performance, showing comparable results. 'N/A' indicates that the results are not reported in the original paper. Please note that TAPE cannot handle link prediction tasks.\\n\\n| Model | Cora | Citeseer | PubMed | obgn-arxiv |\\n| -------- | -------- | -------- | -------- | -------- |\\n| TAPE | 92.90\\u00b13.07 | N/A | 96.18\\u00b10.53 | 77.50\\u00b10.12 |\\n\\nQ1. Thanks for your question. (1) Following a comprehensive analysis in Section 4.2, we directly use the final textual representations as language-based IDs, which is composed of a finite sequence of langugae tokens. (2) We do not specify a fixed number of tokens for the final textual representation. However, in practice, it typically ranges from 10 to 20 tokens, depending on the number of layers of the prompt-based GNN.\"}",
"{\"title\": \"Response to Reviewer gQ3P\", \"comment\": \"Thanks for your review. We clarify your misunderstanding as follows.\\n\\nW1. Apologies for the confusion. As discussed in the Introduction, PromptGFM is specifically designed for text-attributed graphs and can be applied to any type of text-attributed graph, regardless of the domain.\\n\\nW2. Sorry for confusion. We would like to clarify our work aims to do graph foundation model for text-attributed graphs only. After multi-round message passing, each node can be represented by a finite sequence of language tokens and has embodied structural information, which is called language-based IDs. As language tokens are universal and transferable in NLP, the language-based IDs for nodes are also universal across different text-attributed graphs. With universal textual representations, we can construct a large number of task-specific instructions using pure language tokens, and put them in a single multi-instruction fine-tuning framework, adapting to various graphs and tasks. This is how our method works as a universal graph model.\\n\\nW3. Sorry for your misunderstanding. As mentioned in the Introduction Section, our proposed PromptGFM is a graph foundation model for text-attributed graphs, which is the most studied setting in the domain. We will highlight this setting for better clarification.\\n\\nW4. Thanks for your question. We highlight our contributions as follows. (1) Our work represents an attempt at LLM as GNN, where we use prompts to guide LLMs in replicating message passing for each node, aligning with the fundamental principles of existing GNN models. This approach demonstrates how LLMs can function as GNNs, fundamentally differing from existing LLM for GNN and GNN for LLM models, where GNNs and LLMs operate independently. Our approach represents an entirely new paradigm and an unexplored concept; it is by no means a simple direct combination of LLM and GNN. (2) By implementing LLM as GNN in the textual space, our approach addresses out-of-vocabulary issues present in previous works and enables the use of pure language prompt instructions for LLM inference. This resolves modality incompatibility and facilitates positive transfer, significantly advancing the development of GFMs.\"}",
"{\"title\": \"Response to Reviewer p4FX (1)\", \"comment\": \"We truly appreciate your constructive feedback and suggestions. In the following, we clarify the misunderstanding and highlight our contributions.\\n\\nW1. Thanks for you comments. (1) Our method differs significantly from [1, 3] in several key aspects. Specifically, in OFA [1], the authors use an LLM to transform each node into a fixed-length vector and perform message passing over a prompted subgraph in the embedding space. In contrast, PromptGFM introduces two key innovations: (a) Message passing is conducted in the textual space instead of the embedding space, an approach unexplored in prior work. (b) By stacking multi-layer LLM prompts, each node can theoretically capture high-order signals with rich semantics across the entire graph, rather than being limited to a subgraph as in OFA. Regarding \\\"Talk Like a Graph\\\" [3], while it establishes a benchmark by encoding graph data as text, it focuses solely on reasoning tasks and neglects graph understanding, such as message passing. Additionally, this work directly inputs all graph structural information (e.g., adjacency matrix, social networks) into a single prompt. This approach is clearly constrained by the LLM's allowable input length, rendering it impractical for real-world applications. In contrast, PromptGFM explicitly conveys neighboring information node by node via prompts, effectively replicating message passing\\u2014an approach not explored in prior works. (2) In each prompt, we include only the textual representations of the central node and its neighboring nodes from the previous layer to generate the textual representations for the current layer. By stacking multiple rounds of prompts to LLMs, we effectively capture long-range dependencies across the graph, similar to the workflow of GNNs. Additionally, inspired by GraphSAGE, we sample a limited number of 1-hop neighboring nodes for each node's prompt. This approach stays within ChatGPT's length constraints and avoids practical issues associated with long sequences. (3) We have included the templates of prompts for different functionalities in Appendix C. By feeding the title and abstract of each paper, we prompt LLM to construct the initial textual representation of each node. The second prompt in Appendix C illustrate how we replicate the message passing of each GNN layer, which is exactly the same as Figure 3. Additionally, we decompose this prompt to show how it align with the authentic GNN in Figure 2. For more clear demonstration, we will add a case study in Appendix F.1 to make it align with the example in the main content. It shows how we perform layer-by-layer message passing in the textual space and how information is aggregated from neighboring nodes.\\n\\nW2. Thanks for your valuable suggestions. (1) For example, in the first prompt of Appendix C, we use text summarization to generate the initial textual representations for each node, analogous to initializing dense vector embeddings in embedding-based GNNs. Building on this, as shown in the second prompt, we employ aggregation and update processes to guide the LLM in performing GNN-style message passing, moving beyond simple summarization. 
(2) Theoretically, at each layer, we emulate the loss function in GraphSAGE by employing prompts such as \\\"Note connected nodes should share similar semantics and vice versa.\\\" These cumulative, layer-by-layer prompts are analogous to layer-wise loss combinations, formally akin to mean pooling across all GNN layer embeddings, and mathematically extend beyond the textual representation in the final layer. As demonstrated in [4][5], such structural information can be effectively preserved. Experimentally, in Table 9 of Appendix F.2, we present the final textual representations alongside the original keywords from selected papers (the keywords are not included in the prompts and serve as ground truth). The observed similarity demonstrates that our prompt-based GNN effectively captures the core semantics of these nodes.\\n\\nW3. Apologies for any misunderstanding. We would like to clarify that our work focuses exclusively on GFM for text-attributed graphs. In this framework, each node corresponds to a textual representation, referred to as a language-based ID, which is applicable and comparable across all text-attributed graphs and warrants the development of a universal vocabulary for such graphs. This vocabulary enables the construction of pure language instructions, which are crucial for the subsequent multi-instruction fine-tuning framework for LLM inference. Our approach differs from [1,2] and is not applicable to [3] (as indicated in the response above).\"}",
"{\"summary\": \"The paper presents PromptGFM, an approach for integrating LLMs with GNNs by developing a language-based graph vocabulary. It aims to resolve limitations in current GNN-LLM architectures and demonstrates competitive performance in node classification and link prediction.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Novel attempt at LLM-GNN unification using natural language as a graph vocabulary.\\n2. Promising results on basic benchmarks.\", \"weaknesses\": \"1. The paper claims applicability to all graph types, though it only demonstrates effectiveness on text-attributed graphs.\\n2. Lacks evidence that the method generalizes to non-textual graphs, which is critical given the claim of a \\\"universal\\\" graph model.\\n3. How does the model handle graphs without inherent text attributes?\\n4. Can the authors provide clarity on novel contributions beyond combining existing techniques?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a graph foundational model. The proposed method employs one LLM to replicate a GNN\\u2019s workflow through prompting and another LLM for fitting downstream tasks. Specifically, they replicate the message-passing and local aggregation processes of a multilayer GNN by summarizing input using an LLM, prompting sampled one-hop neighborhoods of nodes to the LLM, and prompting it for aggregation across the neighborhoods. To mitigate the problem of out-of-vocabulary (OOV) faced by LLMs observing unseen nodes, they introduce graph vocabulary learning by making language-based IDs. This graph vocabulary is used for making prompts for the inferencing LLM. Finally, to increase generalization the LLM of inference module is fine-tuned on different tasks and datasets by multi-instruction learning\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper introduces a novel approach by employing multilayer LLMs to replicate the message-passing process of GNNs.\", \"The authors tackle an important issue of OOV tokens in LLMs when applied to graph tasks.\", \"Experimental results demonstrate the superiority of their model compared to existing LLM + GNN architectures.\", \"The paper does a comprehensive review of the current methods and the baselines are state-of-the-art.\"], \"weaknesses\": [\"While the proposed method demonstrates clear superiority over current state-of-the-art techniques, there are several significant concerns regarding the paper that need to be addressed:\", \"While the use of multiple layers of LLMs to replicate message passing across multi-hop structures is a novel approach, the fundamental concept of prompting LLMs for message passing in 1-hop neighborhoods is well-explored with similar methods [1, 3]. The authors should clearly distinguish their \\u201cgraph understanding\\u201d module from existing techniques. Notably, their method incorporates nodes sequentially, which raises practical concerns for large graphs due to the maximum input length limitations of LLMs and the associated performance decline when handling long sequences. For example, showing how long-range dependencies and long sequences can be captured by their understanding module would discriminate from previous works. Additionally, the authors do not provide a clear example of a template showing how a node's textual representation is constructed. The examples in Figure 3 and the Appendix are not sufficiently informative and differ significantly from the case study provided. A concrete example that aligns with the templates outlined in the paper would greatly enhance understanding of the method. Concerning this, the authors would provide special tokens, words, phrases or more details of matching input text to a unified template.\", \"The prompt examples provided in the paper, along with the case study, illustrate that the graph understanding module summarizes the input text sequentially across multiple rounds. However, in GNNs, information is propagated through message passing rather than summarized. As a result, the LLM has not effectively replicated the message-passing and aggregation processes. Additionally, because the graph understanding module utilizes a non-deterministic LLM, some nodes and their associated feature and structural information may be lost across multiple layers. Consequently, retrieving the embeddings of all nodes after several rounds of prompting becomes challenging. 
The paper does not address how this information preservation is ensured, especially since the output of the n-th layer of the LLM is expected to represent all input nodes. For example, to address information loss due to the non-deterministic nature of LLMs authors would keep records of node representations after each round of prompting LLM and do an overall aggregation.\", \"The generalization of the LLM module for inference might be limited to the tasks and datasets used for fine-tuning which is far from a universal vocabulary as claimed in the paper. Also, this type of multi-task learning is also studied with GNN modules trained on different tasks and datasets as proposed in [1, 2, 3]. Authors would provide evidence or arguments supporting their claim of a \\\"universal vocabulary\\\", particularly in comparison to existing multi-task learning approaches with GNNs.\", \"The term \\\"prompt-based GNNs\\\" can be misleading, as the underlying model is actually an LLM, not a GNN, and there are fundamental differences between GNN-based models and LLMs. This confusion is further compounded by the visualization in Figure 2, which portrays the current work as a combination of a GNN and an LLM, despite the absence of a GNN module in the model. To enhance clarity, it would be beneficial to revise the terminology and the visualization to better reflect the model's true nature. For example, they can call their method a \\\"prompt-based information propagation\\\" and also remove the \\\"GNN\\\" block from the figure and keep the single LLM.\", \"In the \\\"Data Description\\\" section, the paper states that the Cora, Citeseer, and PubMed datasets are \\\"introduced\\\" by this work. This wording is misleading, as these datasets are not contributions of the paper. Authors would instead use alternatives like \\\"used\\\", \\\"utilized\\\", \\\"evaluated/experiments on\\\", etc.\", \"The explanations of some components in the proposed method, particularly in the sections on \\\"Graph Vocabulary Learning\\\" and \\\"GNN Replication with LLMs,\\\" are overly detailed, which can detract from the paper's fluency. The authors should consider summarizing these sections to better highlight the main contributions. Also, the detailed explanations can be moved to an appendix or supplementary material.\", \"[1] Hao Liu, Jiarui Feng, Lecheng Kong, Ningyue Liang, Dacheng Tao, Yixin Chen, & Muhan Zhang (2024). One For All: Towards Training One Graph Model For All Classification Tasks. In The Twelfth International Conference on Learning Representations.\", \"[2] Sun, X., Cheng, H., Li, J., Liu, B., & Guan, J. (2023). All in One: Multi-Task Prompting for Graph Neural Networks. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 2120\\u20132131). Association for Computing Machinery.\", \"[3] Bahare Fatemi, Jonathan Halcrow, & Bryan Perozzi (2024). Talk like a Graph: Encoding Graphs for Large Language Models. In The Twelfth International Conference on Learning Representations.\"], \"questions\": [\"The discussion on generating graph vocabulary remains incomplete. Specifically, which module is responsible for creating the graph vocabulary: the graph understanding module or the graph inference module? Based on Figure 4 and the data flow depicted, it appears that the graph vocabulary is generated before the predictor LLM. However, the paper discusses this in Section 4.2, which focuses on the graph inference module. 
Could it be that the graph vocabulary is constructed through another process between the two modules?\", \"The distinction between a regular node textual feature and a language-based ID is unclear. In the case study, the language-based ID seems to be a rephrasing of the input textual features. How, do these language-based IDs address the OOV problem if they only replicate a rephrased version of input?\", \"The representation of node features in graphs like Cora and PubMed as text is not addressed. Given that these graphs contain high-dimensional features, creating an optimal textual representation from them is challenging. How are these features conveyed in the text domain?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your reply. But you just call your model a \\\"graph foundation model\\\" in your title. I suggest maybe just call it \\\"graph-llm\\\".\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you authors for addressing my questions and concerns.\\n\\nHowever, my main concerns about the scalability of the work remained because the method doesn't scale with large graphs even by sampling methods. Also, the generated text from a graph may exceed the eligible length LLMs allow for input. More importantly, the way LLMs do the message-passing process, based on the experimental results, is more like summarization than information propagation over the neighborhoods. Furthermore, I think authors should ensure that nodes are preserved through multiple layers of LLM up to the last layer, and it'd be helpful to show how they handle retrieving the final representation of the nodes. \\n\\nTherefore, I will keep my score as I think the paper is not ready to be published.\"}",
"{\"summary\": \"This work proposes a graph foundation model, PromptGFM. Specifically, it includes a graph understanding module where the LLM is prompted to perform 'message passing,' similar to that in GNNs. Additionally, there is a Graph Inference Module, in which each node in the graph is mapped to text tokens, ensuring expressiveness, transferability, and scalability\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It is a good idea to prompt the LLM to simulate message passing in a GNN. The design of feature transformation, message passing, and message aggregation all sound reasonable.\", \"weaknesses\": [\"I am not convinced that this qualifies as a graph foundation model. It is an interesting exploration of how to integrate LLMs and GNNs. However, from a methodological perspective, since PromptGFM prompts the LLM to simulate message passing, it requires text-only attributes and is unable to handle non-textual graphs. In the experiments, the inter-domain ability was demonstrated by transferring from Cora/Citeseer/Arxiv to PubMed. However, this is not a typical inter-domain setting, as they are all citation graphs. There might be shared patterns within citation graphs that contribute to the observed 'inter-domain' ability. I would need more experiments to be convinced of the 'inter-domain' ability.\", \"Lack of strong baseline models. Table 1 didn't include those strong baseline modesl, such as TAPE[1].\", \"[1] He, Xiaoxin, Xavier Bresson, Thomas Laurent, Adam Perold, Yann LeCun, and Bryan Hooi. \\\"Harnessing explanations: Llm-to-lm interpreter for enhanced text-attributed graph representation learning.\\\" arXiv preprint arXiv:2305.19523 (2023).\"], \"questions\": [\"How the node is mapping into the LLM's vocabulary (eq5)? How many tokens are need for each node?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces PromptGFM, an LLM-based approach to perform node classification and link prediction on text-attributed graphs (TAGs) only. First, PromptGFM uses one LLM to summarize textual node features (like a title and abstract of a paper) into a \\u201cLanguage-based ID\\u201d string. Second, another LLM is fine-tuned on prompts with this textual node ID and a sub-sample of IDs of its one-hop neighbors to \\u201cperform neighbor aggregation\\u201d and predict a label for node classification or node ID for link prediction.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"S1. Intra- and inter-domain transferability experiments might be of certain interest.\", \"weaknesses\": \"**W1.** The paper completely lacks theoretical and experimental support for its elaborated claims such as \\u201cmeticulously design a series of prompts to align with the GNN workflow at the finest granularity\\u201d, \\u201cfaithfully reproduce the message passing paradigm of GNNs\\u201d, \\u201cconcise yet meaningful representations\\u201d, \\u201cwe capture semantic and structural information through the prompt-based GNN\\u201d, \\u201cgenerate expressive representations\\u201d.\\n\\nThere is no formal study, proofs, or experimental analysis of how LLM prompts like \\u201cplease aggregate the following neighbors\\u201d can ever capture the math of message passing or its results. Or how \\u201cmeaningful\\u201d or \\u201cexpressive\\u201d the LLM representations are compared to GNNs (there is a whole body of literature on the theory of GNN expressiveness that might have been of help). Perhaps the biggest mistake in claiming the alignment with GNNs is the fact that GNNs are permutation-invariant models whereas all autoregressive LLMs are by default *permutation-variant*, that is, the result of \\u201cPlease aggregate <node 1> and <node 2>\\u201d is very likely be different from \\u201cPlease aggregate <node 2> and <node 1>\\u201d (at least due to positional encodings which will featurize the nodes differently). Constructing a few prompts and claiming \\u201cfaithful reproduction\\u201d without any formal study or theoretical guarantees is not enough to sell such claims.\\n\\nSimilarly, claiming in the ablations (Section 5.3) that PromptGFM suffers from over-smoothing after 4 \\u201cmessage-passing layer\\u201d prompting steps has no theoretical or experimental evidence - since there is no notion of a discrete node in PromptGFM (a node is represented with several tokens from the LLM vocab), then what is getting oversmoothed? There are rather rigorous mathematical analyses of oversmoothing [1,2] that measure the distance between node representations of different layers - it would require evidence of a similar phenomenon in scattered tokenized LLM representations to claim oversmoothing in this case. \\n\\n[1] Oono, Suzuki. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. ICLR 2020\\n[2] Southern, Di Giovanni, et al. 
Understanding Virtual Nodes: Oversmoothing, Oversquashing, and Node Heterogeneity\\n\\n**W2.** The paper largely oversells its technical contributions, namely, the Graph Understanding Module that replicates message passing (see **W1** why it does not, there is no evidence of such replication); and the universal graph vocabulary which works \\u201cacross all the graphs and tasks\\u201d - in fact, PromptGFM does not propose any new vocabulary for encoding graphs and just relies on the existing LLM token vocabularies and textual descriptions of nodes. If input graphs do not have textual features (a common case for non-citation graphs), PromptGFM appears to be of questionable value as a graph foundation model. The paper repeatedly claims that \\u201cexisting methods treat nodes as OOV tokens\\u201d whereas the vast majority of \\u201cLLMs for graphs\\u201d approaches (including compared OFA or GraphText) do exactly the same as PromptGFM and use textual node features as part of an LLM prompt.\\n\\n**W3**. The experimental agenda is rather underwhelming and raises a lot of questions about the practical applicability of PromptGFM. \\n\\n* Only 4 standard citation datasets for node classification (NC) and link prediction (LP);\\n* Link prediction experiments employ GNN baselines unsuited for this task - the authors are aware of the benchmark by Chen et al, 2024 which consists of 20 node/link/graph-level tasks and used much stronger baselines like BUDDY for link prediction;\\n* Comparing billion-sized components of PromptGFM (GPT 3.5 Turbo for Language Node IDs + fine-tuned T5 for actual tasks) which need several GPUs and high monetary inference costs even for small standard graph datasets like Cora vs much smaller GNNs (often 100-1000x smaller) that run for free even on CPUs presents quite a myopic and biased perspective on the advantages of LLMs for graph learning tasks;\\n* It is hard to quantify the importance of reported results when many important experimental details are missing. Are node labels in the NC task encoded via text or somehow else? Are LLMs asked to predict a text label of a node label or select one of K options? How many negative samples were used in the LP task? What is the size of T5 fine-tuned on the NC and LP tasks (there are many options)? Was it a full fine-tune or LoRA?\", \"questions\": [\"Are node labels in the NC task encoded via text or somehow else?\", \"Are LLMs asked to predict a text label of a node label or select one of K options?\", \"How many negative samples were used in the LP task for PromptGFM and GNN baselines?\", \"What is the size of T5 fine-tuned on the NC and LP tasks (there are many options)?\", \"Was it a full fine-tune or LoRA?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer p4FX (2)\", \"comment\": \"W4. Thank you for your suggestions. (1) Our work represents an attempt at LLM as GNN, where we use prompts to guide LLMs in replicating message passing for each node, aligning with the fundamental principles of GNN-based models. This approach demonstrates how LLMs can function as GNNs, fundamentally differing from existing LLM for GNN and GNN for LLM models, where GNNs and LLMs operate independently. (2) Apologies for any confusion. In Figure 2, we compare the prompt-based GNN (our method) with the embedding-based GNN (e.g., GraphSAGE) to highlight the faithful replication of the GNN workflow. It is important to note that PromptGFM is not a direct combination of a GNN and an LLM. (3) Regarding the second question, PromptGFM extends beyond pure information propagation. We argue that the essential elements of a GNN include input, neighbor sampling, aggregation-update mechanisms, multi-layer message passing, and optimization. In PromptGFM, we design specific prompts to empower LLMs to faithfully replicate each of these elements in the GNN workflow. Therefore, the term \\\"prompt-based GNN\\\" effectively summarizes our approach.\\n\\nW5. Thanks for your comments. We will replace this term and thoroughly review the entire text for precise wording.\\n\\nW6. Thanks. We will summarize these sections and move some detailed explanations to the Appendix.\\n\\nQ1. Apologies for the confusion. The construction of the graph vocabulary is detailed in Section 4.2. Following a comprehensive analysis, we directly use the final textual representations as language-based IDs. These IDs are then indexed from the graph vocabulary and integrated into task-oriented prompt templates to create comprehensive instructions. Then, we fine-tune the LLM under a multi-instruction fine-tuning framework. We will consider reorganizing this content for better clarity.\\n\\nQ2. Sorry for misunderstanding. We prompt LLM to iteratively refine textual representations, which goes beyond simple rephrasing. During the multi-round prompting process with LLMs, both semantic fusion and structural information capture occur simultaneously. Using language-based IDs, we can construct readable instructions for LLM inference, resolving modality incompatibility and facilitating positive transfer.\\n\\nQ3. Thanks for your question. Both datasets do have text attributes from other sources, as indicated in the datasets provided in the code repository.\"}",
"{\"title\": \"Response to Reviewer xm9X\", \"comment\": \"Thank you so much for your review. In the following, we clarify some of your misunderstanding.\\n\\nW1. Thank you for your thoughtful feedback. Our work introduces a novel approach to implementing GNN functionality within the natural language space using LLMs. This approach is not intended to entirely replace embedding-based GNNs but rather to offer a complementary perspective that explores new possibilities for leveraging LLMs in graph-related tasks. Consistent with our motivations, we argue that in the era of LLMs, providing formal mathematical proofs may not always be strictly necessary. Instead, our focus on practical and conceptual alignment has already demonstrated meaningful and promising results. We will try to clarify our points to avoid misunderstanding.\\n\\nW2. We would like to clarify that our work focuses specifically on building a graph foundation model for text-attributed graphs. In our approach, each node is represented by a textual description, referred to as a language-based ID, which ensures applicability and comparability across all text-attributed graphs. Unlike existing methods, we highlight that our graph vocabulary, composed of natural language tokens, is inherently sharable and transferable across different graphs, whereas approaches using out-of-vocabulary (OOV) tokens limit generalization to specific graph nodes. Furthermore, our work is fundamentally different from most existing methods and represents the first true implementation of \\\"LLM as GNN,\\\" as outlined in the introduction. For instance, OFA defines functional nodes and uses an LLM to convert textual descriptions into embeddings but does not utilize the LLM for core GNN operations such as message passing. Instead, in OFA, message passing is performed in the vector space using traditional GNN models, leading to embeddings that lack interpretability and generalizability to other graphs or tasks. Our approach directly addresses these limitations by leveraging LLMs for the entire GNN workflow, enabling both interpretability and broader applicability.\\n\\nW3. Our experimental design aligns with standard practices in the field, as demonstrated by studies such as [1][2][3], which follow widely accepted benchmark setups. Regarding resource consumption, we acknowledge that our model requires more computational resources compared to traditional GNNs. However, it delivers higher accuracy and focuses on foundation models with transferability and generalization capabilities, enabling cross-dataset and cross-task applications\\u2014advantages that traditional GNNs cannot achieve.\\nRegarding the experimental details, please check our responses to the questions Q1-Q5.\\n\\nQ1. In the graph understanding module, we only prompt the LLM to perform message passing without using labels. In fact, the performance of different downstream graph tasks depends on different aspects of the graph (node classification tasks focus on the smoothness of nodes, while link prediction tasks emphasize local structural information). Therefore, this approach ensures the outputs of the graph understanding module can generalize across both tasks, i.e. node classification and link prediction. In the graph inference module, when we need construct task-specific prompts, node labels are directly provided in the prompts for node classification tasks to avoid hallucination.\\n\\nQ2. 
For node classification, we directly provide the candidate labels and ensure that the LLM selects one of them. This design is aligned with traditional node classification settings.\\n\\nQ3. As mentioned in Section 4.1, we rely on prompts such as \\\"Note connected nodes should share similar semantics and vice versa\\\" to intuitively optimize the textual representations because negative sampling is redundant in this situation. As for GNN baselines, the number of negative samples is set to 1 by default.\\n\\nQ4. The size of the T5 model fine-tuned on the NC and LP tasks is 0.8B. \\n\\nQ5. It was a full fine-tune. Since Flan-T5-large has relatively few parameters, we perform full-parameter fine-tuning.\"}",
"{\"title\": \"Response to authros\", \"comment\": \"Thank you for addressing my questions.\\n\\nI agree that the contribution of this work should focus on the graph foundation model for TAG rather than general graph foundation models.\\n\\nAfter incorporating TAPE as a baseline, it seems that PromptGFM only outperforms on 1 out of 4 datasets.\\n\\nOverall, I will maintain my current score, considering the overall contribution and model performance.\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}"
]
} |
|
1F8xTfv6ah | Advancing Out-of-Distribution Detection via Local Neuroplasticity | [
"Alessandro Canevaro",
"Julian Schmidt",
"Mohammad Sajad Marvi",
"Hang Yu",
"Georg Martius",
"Julian Jordan"
] | In the domain of machine learning, the assumption that training and test data share the same distribution is often violated in real-world scenarios, requiring effective out-of-distribution (OOD) detection.
This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs).
Unlike traditional multilayer perceptrons, KANs exhibit local plasticity, allowing them to preserve learned information while adapting to new tasks.
Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples.
We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques.
These results underscore the potential of KANs in enhancing the reliability of machine learning systems in diverse environments. | [
"Out-of-Distribution Detection",
"Local Neuroplasticity",
"Kolmogorov-Arnold Networks"
] | Accept (Poster) | https://openreview.net/pdf?id=1F8xTfv6ah | https://openreview.net/forum?id=1F8xTfv6ah | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rqoQt9hPFa",
"qFbBDRcPVZ",
"iHmrYpinIr",
"i8LgohWADR",
"exJIUB3P1X",
"ekornAkiHt",
"Zv6culce7a",
"ZUEa2Ubtea",
"Yo8qM977Tf",
"WhBIkzfSHP",
"Uf7Dk7jzIC",
"P3njlYMcer",
"O1DVsb3Vrn",
"GupMw0GEd0",
"Gubu6hJ7Tp",
"FLHuzSjZQd",
"3v3SJ9DLSP",
"3f6QKW7vNu"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732958810663,
1730817517204,
1732538983820,
1732538057575,
1730679475588,
1737523871719,
1732958303695,
1732642172542,
1732958677644,
1730193102775,
1732958149625,
1732538544140,
1732958449181,
1734749987445,
1732546706198,
1730665841247,
1733141270006,
1732539440397
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Reviewer_Asxo"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Reviewer_oW4J"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Reviewer_Asxo"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Reviewer_WMKe"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Area_Chair_haj7"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Reviewer_cRaW"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7878/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"We carried out further experiments on the OpenOOD ImageNet-1K (full-spectrum) benchmark.\\nOur method now holds first place on this leaderboard, outperforming the previous best by 2%. \\nDetailed results have been shared in the general comment \\\"Evaluation on ImageNet-1k\\\" for your convenience.\\n\\nWe hope you will take these additional results into account during your evaluation.\\\\\\nThank you again for your valuable feedback.\"}",
"{\"summary\": \"The paper introduces a new out-of-distribution (OOD) detection method leveraging Kolmogorov-Arnold Networks (KANs), which utilize \\u201clocal neuroplasticity\\u201d to differentiate in-distribution (InD) data from OOD data via comparing the activation patterns of a trained KAN against an untrained counterpart. KANs stand out due to their spline-based architecture, which preserve specific network regions during training, aiding in the OOD detection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The described method is clearly defined and is easy to reproduce.\\n\\nThe method is validated across image and tabular medical data benchmarks, demonstrating improved performance and robustness compared to other state-of-the-art OOD detectors. \\n\\nThe findings highlight KANs' potential in enhancing model reliability across diverse environments by maintaining high detection accuracy, even with a relatively small training dataset.\\n\\nThe results (although not on all datasets) look promising in terms of different OOD detection accuracy especially for the case of low number of training samples.\", \"weaknesses\": \"Despite the clarity, some steps of the approach implementation look like ad-hoc tricks for improving the method\\u2019s performance without developing a deep intuition why a particular step is better than alternatives (please, see questions below for details).\\n\\nThe fact that not all datasets (leaderboards) from the OpenOOD were used for testing the approach, along with the obtained not perfect results on CIFAR-100, suggest that the datasets were selected manually. The authors need to prove absence of any selection bias. \\n\\nI am strongly concerned about the scalability of the proposed method, which requires splitting the training dataset into a number of subsets and fitting a model per a subset (see comments below).\\n\\nThe method resembles feature-preprocessor (backbone-dependent), being not applicable to the case where a good feature extractor is not known.\", \"questions\": \"Questions and suggestions:\", \"major\": \"Testing of approach on other large-scale datasets would be beneficial, consider other leaderboards from openOOD like ImageNet-200, 1K.\\nThe choice of the K-means clustering approach looks quite arbitrary for initial data splitting. Why not use other clustering approaches like DBScan, Spectral, Agglomerative or even Gausian mixture? I believe, K-means choice should be justified here.\\n\\nOne can assume a dataset with a lot of natural clusters (like ImageNet-1K) will require a lot of time for training KANs. Show that the approach is actually scalable, robust, and not computationally burdensome in case of a large number of clusters.\\n\\nThe robustness of clustering approach is not evident for the case of regression task due to the poor internal separability of data clusters. I suggest adding one example of OOD detection where the training dataset is directly related to the regression task. \\n\\nThe method looks strongly backbone dependent and may be poorly working for the plethora of practical tasks where the good backbone feature extractor is not known. Is it possible to exemplify the method robustness for the case of the absence of backbone preprocessor? \\nProbably, some classic ML tabular datasets (e.g. 
from sklearn) could be useful here.\\n\\n\\u201cImportantly, our experiments show that the previous methods suffer from a non-optimal InD dataset size\\u201d - this statement requires more experimental support. Currently, the method superiority was shown only for the CIFAR-10 dataset.\", \"minor\": \"Line 183 (figure caption): \\u201c- \\u201c(e) InD score S(x)\\u2200x \\u2208 [\\u22121,1] \\u201c - why the InD score can take negative values? The original formula (5) contains absolute value operation brackets. Is this the typo?\", \"line_187\": \"\\u201cA simple, yet effective approach is to split the dataset based on class labels.\\u201d - It is not obvious how to train KANs in case of such splitting. One can imagine a situation where positive class is OOD for a KAN trained on samples of negative class, and the maximization scoring procedure identifies positive class as an OOD. This point should be clarified or rephrased.\\n\\nI\\u2019m interested if the method will be robust for the case of NaN-enriched data samples? It is not a request for an additional analysis but rather an interesting point for the discussion of method limitations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer oW4J\", \"comment\": \"We sincerely appreciate the time and effort invested in providing valuable feedback on our submission. We are pleased that our contributions and thorough experiments have been acknowledged. Below, we address the specific points raised:\\n\\n**Q1**: Method structure and clarity (Eqn 5):\\\\\\n**A1**: We acknowledge the need for further clarity regarding the method structure. Intuitevely, many methods define the boundary surrounding the InD based on the training samples and classify samples at inference time based on their distance to this boundary. In our approach, the InD boundary is encapsulated within the spline trainable coefficients, while regions activated by InD samples are utilized to determine the distance from the boundary through the aggregation of the delta matrix. we clarified this in line 136-137 of the revised manuscript.\\n \\n**Q2**: Generalization performance with KANs:\\\\\\n**A2**: We agree that integrating KANs in the training scheme can influence the final performance. We have included new experiments in Appendix A.2 to illustrate that our detector can be applied directly to the data features without an additional backbone model. However, it is important to emphasize that our method functions as a post-hoc processor, applied to a pre-trained backbone. Consequently, training our detector does not impact the backbone model as it function as a separate block. The advantage of post-hoc methods is their ease of integration with different backbones without requiring additional training, even in scenarios where no feature-extractor backbone is available. \\n \\n**Q3**: Results on CIFAR-100:\\\\\\n**A3**: Although the improvements on CIFAR-100 are minimal, it is noteworthy that our method is either the best or statistically similar to the best-performing method across a wide range of benchmarks on the overall average AUROC metric. This consistency is not observed in other approaches. For instance, while MDS performs well on medical benchmarks, it shows poor performance on CIFAR-10 and CIFAR-100. Conversely KNN excels on CIFAR benchmarks but underperforms on the medical datasets. \\nTo highlight this aspect even more, we conducted additional experiments on the ImageNet-200 benchmark (Table 2 of Section 3.2) where the KAN outperforms the previous best method by approximately 4\\\\%.\\nThis highlights the robustness and versatility of our method.\\n\\n**Q4**: Computational cost:\\\\\\n**A4**: We have extended Appendix A.11 to include not only inference time but also the setup time (which includes extracting the latent features from the backbone, the partitioning method and the trainings of the KANs). We reported the results as a function of the dataset size and the number fo partitions. The results shows that the setup time of our detector scales linearly with the dataset size, in-line with other methods.\\n\\nThank you for your thoughtful comments. We have revised the manuscript in response to your suggestions, leading to significant enhancements in both quality and clarity.\"}",
"{\"title\": \"Response to Reviewer WMKe\", \"comment\": \"We thank the reviewer for their detailed feedback and constructive criticism. We appreciate the acknowledgment of the originality, performance, and exhaustive experimentation of our method. Below, we address the concerns raised in the review:\\n\\n**Q1**: No experiments demonstrate the method's scalability to larger images or real-world problems.\\\\\\n**A1**: We acknowledge the absence of large-scale experiments in our initial submission. To address this, we have conducted additional experiments on the ImageNet-200 dataset, which is part of the OpenOOD benchmarks and contains five times more images that are seven times larger compared to the CIFAR benchmark. We specifically considered the full-spectrum version of the benchmark as it makes the detection problem more challenging and closer to real-world situations by adding various covariate-shifted samples to the InD test set. As shown in Table 2 of Section 3.2 of our revised manuscript, the KAN detector ranks first, surpassing the previous best method by approximately 4%.\\n \\n**Q2**: The partitioning problem of KANs is very severe.\\\\\\n**A2**: As suggested by Reviewer Asxo, we incorporated additional experiments on regression datasets where no classes exist. We show that even when classes are not available, the partitioning method still performs well (see Appendix A.2). We hope that this experiment, along with the tests on the large-scale ImageNet-200, demonstrates that the partitioning method is an essential component of our detector and that it effectively works in various scenarios.\\n \\n**Q3**: The influence of model capacity is unclear.\\\\\\n**A3**: The model size in our case is controlled by three factors: the input size, the output size, and the grid size. The first two are generally dictated by the problem itself. Note, however, that the input to our detector is not the raw image but the latent features space of the backbone. Typically, the latent space has a size drastically smaller compared to the input image, ensuring that larger images do not compromise the scalability of our detector. The effect on performance given by the grid size, and thus the model size, is shown in Table 8 of our manuscript. We also added this explanation to the revised manuscript in Appendix A.11.\\n \\n**Q4**: Line 43 has an incorrect citation.\\\\\\n**A4**: Thank you for pointing this out. We have corrected the citation in Line 43.\\n \\n**Q5**: Clarification on how the hyperparameter search is conducted.\\\\\\n**A5**: We tried to keep the search space quite large to ensure that we capture the optimal values. For the parameters related to the KAN (i.e. grid size, learning rate, epochs) we used similar ranges to what described in the examples of the original KAN paper. We have added the ranges considered for each parameter in Appendix A.12.\\n\\nThank you once again for your valuable feedback. We have carefully addressed all the comments and made revisions based on your suggestions, which we believe have greatly enhanced the quality and clarity of our paper.\"}",
"{\"summary\": \"This paper introduces a novel OOD detection method that leverages the unique local neuroplasticity of Kolmogorov-Arnold Networks (KANs). By comparing the activation patterns of a trained KAN against its untrained counterpart, the method identifies OOD samples across diverse benchmarks, including computer vision and tabular medical data. Experimental results demonstrate that the KAN-based approach outperforms existing methods and shows resilience to variations in in-distribution dataset sizes. This robust, adaptable approach makes KANs a promising tool for enhancing the reliability of ML systems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It introduces an innovative approach to OOD detection, offering fresh ideas and a unique viewpoint that advances the current understanding of OOD detection techniques.\\n2. The paper effectively harness the neuroplasticity characteristic of KANs, ensuring that learning new tasks only affects the network regions activated by the training data, effective motivation for OOD detection.\\n3. The paper includes thorough experiments on standard benchmarks.\", \"weaknesses\": \"1. While the core idea is clear, the method appears loosely structured. Specifically, the role of multiplying location-specific information with regions activated by InD samples to achieve the delta function (used in the score function) is unclear (e.g., Eqn 5). Additionally, no study is provided to analyze these aspects, leaving parts of the methodology unexplored.\\n2. The paper does not present or discuss the generalization performance of models when KANs are incorporated into the training scheme. \\n3. Results on CIFAR-100 indicate minimal advantage over existing methods, as the improvements in detection performance appear statistically insignificant.\\n4. Including a discussion on the computational cost of the proposed method would strengthen the paper. Given that the approach involves dividing the dataset into different groups, insights into computational efficiency would enhance understanding of the method\\u2019s practicality.\", \"questions\": \"Please answer the points raised in the questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"As you suggested, we conducted additional experiments on the OpenOOD ImageNet-1K (full-spectrum) benchmark.\\nOur method achieves first place also on this leaderboard, outperforming the previously best method by 2\\\\%. \\nWe include a detailed table of these results in a general comment for your reference.\\n\\nWe hope this addresses your remaining concerns. \\nThank you once again for your valuable feedback and for considering our additional experiments.\"}",
"{\"comment\": \"I appreciate the new experiments but the scalability to large datasets still remains an issue (time constraints indeed are harsh for the rebuttal). This appears to be a very borderline case among all reviewers because of this trait. So, I will keep my rating but will increase my 'soundness' score to acknowledge authors' explanations.\"}",
"{\"comment\": \"We performed additional tests on the OpenOOD ImageNet-1K (full-spectrum) benchmark.\\nOur method achieves first place on this leaderboard as well, exceeding the performance of the previously best method by 2%. \\nThe detailed results are available in the general comment \\\"Evaluation on ImageNet-1k\\\" for your review.\\n\\nWe kindly ask you to consider these findings in your evaluation process.\\\\\\nOnce again, we appreciated your constructive feedback.\"}",
"{\"summary\": \"The authors propose to use Kolmogorov-Arnold Networks (KAN) for out-of-distribution detection. The key advantage of KANs is their plasticity which results in avoiding catastrophic forgetting. The authors show that this property can be leveraged to detect OOD samples.\\n\\nThe method demonstrates good performance on small datasets, but the proposed method does not properly address the shortcomings of the KAN architecture, and the method was not validated in terms of scalability to realistic problems. Overall I rate weak reject.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Originality: Given that KANs are a novel type of architecture the research is a very current\", \"The method is evaluated on image and tabular data, demonstrating feasibility across different domains.\", \"Performance: The performance on the benchmarks is convincing and demonstrates superiority over a vast set of previous methods\", \"Exhaustive experimentation on toy datasets including multiple important ablations that erase questions (such as stochasticity)\"], \"weaknesses\": [\"Major:\", \"Scalability: No experiments demonstrate the method's scalability to larger images or real-world problems.\", \"Insufficient capturing of joint distribution: I believe the partitioning problem of KANs is very severe. While the problem is mentioned I believe it is not properly addressed. Essentially, by partitioning the dataset you are just scaling the problem down to subclasses. What if the l-shaped differences, that you mention in Table 2, appear on an intra-class level instead of a class level? While this may work for toy data if the data is sufficiently separable using k-means or class labels directly, I doubt it will work for more difficult problems such as MVTech.\", \"The influence of Model capacity is unclear: KANs are known for their improvements in lack of catastrophic forgetting. How does the model size influence this. Additionally, if KANs treat features individually, the difficulty of the problem and the necessary capacity of the method scales drastically with the image size.\"], \"questions\": \"Line 43 has wrong citation\\n\\nYou mention that the hyperpareter search can be quite challenging. How did you decide for the parameter space especially regarding number of epochs, learning rate, partitionings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Evaluation on ImageNet-1k\", \"comment\": \"We further evaluated our method on the challenging ImageNet-1k (full-spectrum) benchmark.\\\\\\nRemarkably, our method ranks first, outperforming the previous best approach (NAC) by approximately 2% in the overall average AUROC metric. \\nDetailed results can be found in the table below.\\n\\n| **Method** | **SSB-hard** | **NINCO** | **iNaturalist** | **Textures** | **OpenImage-O** | **Avg Near** | **Avg Far** | **Avg Overall** |\\n|------------|--------------|-----------|-----------------|--------------|-----------------|--------------|-------------|-----------------|\\n| OpenMax | 53.79 | 60.28 | 80.30 | 73.54 | 71.88 | 57.03 | 75.24 | 67.96 |\\n| ODIN | 54.22 | 60.59 | 77.43 | 76.04 | 73.40 | 57.41 | 75.62 | 68.34 |\\n| MDS | 39.22 | 52.83 | 54.06 | 86.26 | 60.75 | 46.02 | 67.02 | 58.62 |\\n| MDSEns | 37.13 | 47.80 | 53.32 | 73.39 | 53.24 | 42.47 | 59.98 | 52.98 |\\n| RMDS | 56.61 | 67.50 | 73.48 | 74.25 | 72.13 | 62.06 | 73.29 | 68.79 |\\n| Gram | 51.93 | 60.63 | 71.36 | 84.83 | 69.40 | 56.28 | 75.20 | 67.63 |\\n| ReAct | 55.34 | 64.51 | 87.93 | 81.08 | 79.34 | 59.93 | 82.78 | 73.64 |\\n| VIM | 45.88 | 59.12 | 72.22 | 93.09 | 75.01 | 52.50 | 80.10 | 69.06 |\\n| KNN | 43.78 | 59.86 | 67.79 | 90.29 | 69.98 | 51.82 | 76.02 | 66.34 |\\n| ASH | 54.66 | 66.38 | 89.23 | 89.53 | 81.47 | 60.52 | 86.75 | 76.25 |\\n| SHE | **58.15** | 64.27 | 84.71 | 87.48 | 76.92 | 61.21 | 83.04 | 74.31 |\\n| GEN | 52.95 | 62.73 | 78.47 | 71.82 | 72.62 | 57.84 | 74.31 | 67.72 |\\n| NAC | 52.48 | 66.49 | 88.92 | 92.77 | 80.76 | 59.48 | 87.48 | 76.28 |\\n| **KAN** | 55.88 | **69.55** | **91.55** | **93.45** | **82.15** | **62.71** | **89.05** | **78.52** |\\n\\nThis benchmark includes over 1.2 million samples and utilizes a larger backbone network (ResNet-50). \\nThe results further validate our method's capability to handle complex, large-scale problems.\\\\\\nWe currently cannot update our manuscript, but if given the opportunity, we will include these results in the camera-ready version of our paper.\"}",
"{\"title\": \"Response to Reviewer cRaW\", \"comment\": \"Thank you for your thorough review and constructive feedback on our submission. We appreciate your positive remarks regarding the relevance, novelty, and motivation of our work, as well as your recognition of the extensive set of experiments we conducted. Below, we address the specific questions and concerns you raised:\\n\\n**Q1**: Univariate Nature of KANs\\\\\\n**A1**: We acknowledge that KANs, being inherently univariate, might seem to limit their applicability. However, our dataset partitioning strategy\\u2014whether by class labels or clustering methods such as k-means\\u2014enables us to divide complex and correlated feature distributions into smaller ones that can be well-approximated using only the marginal feature distribution. Consequently, the KAN detector can effectively process these partitions. We have clarified this point in Appendix A.7 together with new experiments and discussions on alternative clustering techniques. As suggested by another reviewer, we also tested the partitioning method of our detector in regression-based datasets where no classes exist and showed that the partitioning method still performs well (see Appendix A.2).\\n\\n**Q2**: Scalability to Large Datasets\\\\\\n**A2**: We demonstrate that our method effectively handles large-scale datasets with a new experiment on the ImageNet-200 benchmark which contains five times more images that are seven times larger compared to the CIFAR benchmark. Here our method outperforms all previous baselines (see Table 2 of Section 3.2 for more details). We have also revised Appendix A.11 to include a discussion on the method's complexity and scalability.\\n\\n**Q3**: Recent Usage of KANs\\\\\\n**A3**: We agree that KANs are relatively new and not yet widely adopted. Our focus was on using the KAN detector as a post-hoc method. This means that it can be applied to any existing backbone (e.g., CNN or Transformer) without influencing the classification output of the backbone itself. We show that this works for small- and large-scale datasets, image and tabular data, and different backbone models.\\n\\n**Q4**: Training of Different KANs\\\\\\n**A4**: We apologize for the lack of clarity regarding the training process of different KANs. Each KAN is initialized identically, with the only difference being the data subset (partition) on which they are trained. The training task is the same for all models, and in our case, we used the same loss function as the backbone. We have clarified this point in lines 191-192 of our revised manuscript.\\n\\n**Q5**: Integration with Pre-trained Models\\\\\\n**A5**: To clarify, our approach (like other post-hoc methods) does not replace pre-trained models but rather complements them. The pre-trained model, such as ResNet-18, is used for feature extraction. These features are then processed by our KAN-based detector (or any other considered post-hoc technique) in a subsequent phase. These backbone models do not have to be based on KANs; they can follow any architecture, such as fully connected MLPs, ResNets, or Transformers.\\n\\n**Q6**: Use of Pre-trained Backbones\\\\\\n**A6**: The primary job of the backbones is to perform the classification or regression task. The OOD detector is applied afterward to detect semantically different samples (e.g., samples that do not belong to the training classes), which would yield incorrect predictions by the backbone. 
From the detector's perspective, the backbone's job is simply to provide the latent features. We clarified this in the revised manuscript at lines 266-268.\\n\\n**Q7**: Generalization of Hyperparameters\\\\\\n**A7**: The validation set contains both InD and OOD samples. However, the OOD samples encountered at test time are of a different type as they belong to different datasets and hence classes. \\n For instance, on the CIFAR-10 benchmark, the validation set includes only OOD samples of the \\\"near\\\" type (CIFAR-100 and TIN datasets) while the test set contains also four datasets with \\\"far\\\" OOD samples. Our method performs well on both categories indicating that the selected hyperparameters generalize well even when new OOD type are encountered.\\n\\n\\nWe greatly appreciate the insightful feedback provided. We have implemented the recommended changes and believe these revisions have substantially improved the paper's quality and clarity.\"}",
"{\"comment\": \"We have conducted additional experiments on the OpenOOD ImageNet-1K (full-spectrum) benchmark. Our method ranks first also on this leaderboard, surpassing the previously best method by 2%. You can find detailed results in a general comment we have provided.\\n\\nWe kindly ask you to consider also these new results in your evaluation.\\\\\\nThank you again for your insightful feedback.\"}",
"{\"metareview\": \"This paper presents an OOD detection method using properties of the KAN model. They compare activation patterns of a trained KAN against an untrained one and look for patterns that will demonstrate OODness based on the properties of the KAN. They show experiments on benchmark datasets demonstrating their superior OOD detection performance.\", \"strengths\": \"Interesting and novel idea, positive results on benchmarks\", \"weaknesses\": \"Limited evaluation on real, larger scale datasets, performance gains are modest\\n\\nMore evaluations on larger scale datasets would make the paper's contributions stronger and lend more credence to the detection method. The performance gains seem to reduce as the experiments move to large scale (unclear what the statistical difference is from the new table for Imagenet - 1k) - more complex/large datasets will help increase understanding of this as well. \\n\\nThe paper remains in borderline with all reviewers after the new experiments as well - this is understandable given the short rebuttal time, but provides opportunities for further improvement of the paper. \\nIf accepted, the authors will need to include the new large scale experiments with clear discussion on the computational aspects as well as include statistical estimates in the table to judge how statistically significant the 2% performance increase is.\", \"additional_comments_on_reviewer_discussion\": \"The main concern amongst reviewers was in applicability of the method beyond simple datasets. Due to this, there were no clear champions for the paper.\\nThe new experiment on ImageNet-1k is a large scale experiment that still demonstrates the performance of the method but I did not see the same format of results as in the paper (metric and variance in metrics to judge if the results are statistically significant). Reviewer Asxo judges that the authors probably rushed to submit this and the ask for more evaluations on challenging datasets, while very important, is a substantial ask in the short rebuttal period - it is unclear how the new results should be assessed.\", \"the_reviewers_raised_other_points_as_well\": [\"Choice of k-means for clustering\", \"Performance in regression tasks\", \"Computational costs\", \"Other clarifications regarding training process, influence of model capacity, amongst others\", \"I feel the authors have answered these questions in the rebuttal though most reviewers were unresponsive. Reviewer Asxo feels the paper is borderline accept after all the rebuttals.\"]}",
"{\"title\": \"Response to reviewer feedback\", \"comment\": \"We thank all the reviewers for their valuable feedback, which has significantly improved the quality of our work.\\n\\nDetailed answers and additional experiments regarding all the concerns raised by the reviewers can be found in the individual replies of each review.\\n \\nA significant concern raised by all reviewers regards the scalability of our method.\\nTo address this, we conducted an extensive experiment on the large-scale ImageNet-200 full-spectrum benchmark. \\nThis benchmark is particularly challenging as it includes five times more samples with images that are seven times larger compared to the CIFAR datasets. \\nAdditionally, the full-spectrum version increases the detection challenge by enriching the InD test set with extra covariate-shifted samples.\\n\\nOur results, illustrated in Table 2 of Section 3.2 of the revised manuscript, show that our method ranks first, surpassing the previous best approach (ASH) by approximately 4\\\\% on the overall average AUROC metric. \\nThese results demonstrate the scalability and effectiveness of our method in handling large-scale datasets and complex real-world scenarios.\\n\\nWe also expanded Appendix A.11 with a detailed discussion on the method's complexity. \\nOur analysis indicates that the most impactful factor is the dataset size, and our method exhibits a similar scaling law to other approaches. \\nThis detailed discussion provides insights into the computational efficiency and practicality of our approach, reinforcing its applicability to large-scale problems.\\n\\nWe are grateful to the reviewers for their suggestion to include this experiment, as it has enhanced the robustness and comprehensiveness of our paper.\"}",
"{\"summary\": \"Authors utilize Kolmogorov Arnold Networks (KAN) for out of\\ndistribution detection. The main idea is to leverage the fact that KAN\\nuses BSplines as non-linear functions. Feature values that appear\\nwithin InD, if they are concentrated in certain part of the feature\\nspace - which is $\\\\mathbb{R}$ in this case, will only modify certain\\nBSpline coefficients. In this scenario when a feature value that is\\ndifferent than the InD comes, the BSpline coefficients at those\\nlocations will not have been modified during training. Hence, the\\ndifference in activation between trained and untrained network will be\\nlow. Experiments with benchmark datasets and comparisons with large\\nset of alternatives are presented.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The topic is very relevant.\", \"The idea is novel and quite intuitive.\", \"The results are motivating. Even though this is not the best\", \"performing all around, it is one of the top algorithms.\", \"Authors do a great job explaining the method as well as motivating\", \"the approach.\", \"Large set of experiments.\"], \"weaknesses\": \"- The model - due to KANs - is heavily univariate. While authors do\\n dataset partitioning to alleviate the problem, I do not see how they\\n can actually do so. Unsupervised combinations of features are\\n mentioned, however, their applicability also raises questions.\\n- Partitioning the dataset requires having multiple trained models,\\n which limits the applicability of the approach for large scale\\n problems.\\n- KANs are interesting but most recent work do not use these\\n networks. This naturally limits the applicability of the approach.\", \"questions\": \"- It is not clear how different KAN$_i$'s are trained. It would be\\n good to explain this a bit more in depth.\\n- Authors state that the method can be seamlessly integrated with any\\n pre-trained model. I do not really understand this. Doesn't one need\\n to use KAN model for this?\\n- How are the pre-trained backbones used for KAN? Does one use the\\n features extracted from these networks and build classifiers and\\n regressors with KAN architecture?\\n- Authors state that hyperparameters are tuned using a validation\\n set. How much do the trained hyperparameters generalize to OOD types\\n unseen in the validation set?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Training Time\", \"comment\": \"We would like to provide additional details regarding the ImageNet-1K experiment reported in our previous comment.\\nFor this experiment, we employed class-based partitioning, resulting in 1000 clusters. \\nHowever, we reduced the number of outputs for each model from 1000 to 10 classes by randomly grouping labels together. \\nThis adjustment is motivated by the fact that with 1000 clusters, the problem tackled by each model is greatly reduced, and thus the model's capacity can also be reduced.\\n \\nAs a result, the training time per sample of our model is slightly lower than that for the ImageNet-200 benchmark: approximately 1.6ms per sample compared to 1.9ms per sample for ImageNet-200. \\nThis indicates that our method remains efficient and robust even with a large number of clusters.\\nWe hope this clarification alleviates any concerns regarding the scalability and efficiency of our approach.\"}",
"{\"title\": \"Reply to Reviewer Asxo\", \"comment\": \"We sincerely thank the reviewer for the detailed feedback and the recognition of our method's clarity and reproducibility. We greatly appreciate the constructive criticisms and suggestions, which have been instrumental in refining our work. Below, we address each of the major and minor concerns raised.\\n\\n**Q1.1/Q2**: Testing on other large-scale datasets and scalability and computational burden: \\\\\\n**A1.1/A2**: We understand your concerns regarding scalability, which were also raised by other reviewers. To address this, we included experiments on the OpenOOD ImageNet-200 leaderboard, where our method ranks first with an overall average AUROC approximately 4\\\\% higher than the previously best performing method (detailed results are available in Table 2 of Section 3.2 in the revised manuscript). Due to time constraints, we could not include all OpenOOD leaderboards and other suggested benchmarks. However, we believe that the ImageNet-200, with five times more samples than the CIFAR leaderboards and images seven times larger, effectively demonstrates our method's scalability. To further eliminate any selection bias, we opted for the full-spectrum version of the ImageNet-200 benchmark, which includes covariate-shifted InD samples, making the detection challenge more complex and closer to real-world applications. We also expanded Appendix A.11 with a detailed discussion on the method's complexity showing that the most impactful factor is the dataset size and that our method has a similar scaling law to other approaches such as KNN.\\n\\n**Q1.2**: Choice of K-means clustering:\\\\\\n**A1.2**: The choice of K-means was motivated by its simplicity and low computational overhead. Based on this review, we conducted additional experiments testing several alternative clustering methods and found that the choice of clustering method does not significantly affect detection performance. These results are now included in the revised manuscript in Appendix A.7.\\n\\n**Q3**: Regression task example:\\\\\\n**A3**: To demonstrate that our method performs well on regression tasks, we tested it on the California Housing and Wine Quality datasets and showed that our method outperfoms the KNN detector on both of them. To further validate that the partitioning method is a core component of our detector and not merely a performance-improvement trick, we also used the Friedman synthetic dataset. The results show that the partitioning method is effective even for regression tasks with complexly correlated input features. These experiments have been added to Appendix A.2 in the revised manuscript.\\n\\n**Q4**: Backbone dependency:\\\\\\n**A4**: As suggested, we performed OOD detection on datasets without using a backbone or any other feature-extraction method. We used the same three datasets mentioned in the above answer A3 (i.e., California Housing, Wine Quality, and Friedman synthetic dataset). Our method showed superior performance compared to the KNN baseline on all three datasets, proving its applicability even in the absence of a backbone. 
In Appendix A.2, we also clarified that our method does not require any additional information from the backbone other than the features, unlike NAC, which requires the gradient of the backbone network.\\n\\n**Q5**: Support for the statement on InD dataset size:\\\\\\n**A5**: To better support our claim, we repeated the same experiment on the CIFAR-100 benchmark, and the results show a similar conclusion (see Table 6 in the revised manuscript).\\n\\n**Q6**: InD score negative values (line 183):\\\\\\n**A6**: It is correct that the InD score cannot be negative. Here the range [-1, 1] here refers to the support of the input space \\\\( x \\\\).\\n\\n**Q7**: Clarification on dataset splitting (line 187):\\\\\\n**A7**: When a positive class is OOD for a KAN trained on samples of a negative class, the InD score will be low for that KAN. If another KAN is trained on the positive class, the maximization procedure will flag this sample as InD, as this second KAN will return a high InD score. If the negative class is actually OOD, none of the KANs will return a high InD score, and the maximization procedure will correctly flag the sample as OOD. We clarified this relationship with the InD score at lines 196-197 in the revised manuscript.\\n\\n**Q8**: Robustness for NaN-Enriched Data:\\\\\\n**A8**: We thank the reviewer for raising this interesting point. With modifications to the KAN grid, it should be possible to handle NaN values. We hypothesize that assigning an individual spline coefficient to handle NaN values should suffice. However, since there is no distance relation between NaN and other spline coefficients, the smoothing operations of splines around NaN will be affected.\\n\\nWe sincerely appreciate your detailed feedback. Your suggestions have been very valuable in refining our paper, and we believe that the added experiments and improvements in clarity enhanced the overall contribution of our work.\"}"
]
} |
1ExfUpmIW4 | Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs | [
"Sungmin Cha",
"Sungjun Cho",
"Dasol Hwang",
"Moontae Lee"
] | Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora. However, this poses risk of privacy and copyright violations, highlighting the need for efficient machine unlearning methods that remove sensitive data without retraining from scratch. While Gradient Ascent (GA) is commonly used to unlearn by reducing the likelihood of generating unwanted content, it leads to unstable optimization and catastrophic forgetting of retrained knowledge. We find that combining GA with low-rank adaptation results in poor trade-offs between computational cost and generative performance. To address these challenges, we propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables robust and efficient unlearning for LLMs. First, we introduce Inverted Hinge Loss, which suppresses unwanted tokens while maintaining fluency by boosting the probability of the next most likely token. Second, we develop a data-adaptive initialization for LoRA adapters via low-rank approximation weighted with relative Fisher information, thereby focusing updates on parameters critical for removing targeted knowledge. Experiments on the Training Data Extraction Challenge dataset using GPT-Neo models as well as on the TOFU benchmark with Phi-1.5B and Llama2-7B models demonstrate that our approach effectively removes sensitive information while maintaining reasoning and generative capabilities with minimal impact. Our implementation can be found in https://github.com/csm9493/efficient-llm-unlearning. | [
"Machine Unlearning",
"Large Language Models",
"Low-rank Adaptation"
] | Accept (Poster) | https://openreview.net/pdf?id=1ExfUpmIW4 | https://openreview.net/forum?id=1ExfUpmIW4 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"x3kDuaTzon",
"se9Pron19k",
"sA3F4IbLhh",
"oBRweih2a8",
"nvXQoF081s",
"kS0JAfoKzx",
"k7ylj2C9YW",
"eCnTjGHU2K",
"dVx7lhowPX",
"cLK3HPIV0W",
"YtFpO5Wfy7",
"XHlknXIxWl",
"WeJzVM639m",
"TnKxkrdA1g",
"OuRWadtu78",
"MqYPzWiOKJ",
"M5aXTAQNb4",
"KrUTpBFCFK",
"GAbnxJUnP4",
"FK3NWPDtGm",
"BjD8wa8tzF",
"9BYbFHitLV",
"7AbGjZNoG7",
"76LwY1mLIS",
"5QU17UdwLS",
"25xmVVpLwN",
"1zpbMtsGWy",
"1qBerNkUlV"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"meta_review"
],
"note_created": [
1732056370380,
1732805175460,
1732780296285,
1732671490626,
1732056464206,
1732596011257,
1732519221983,
1732670459504,
1732434121970,
1729503275023,
1732282745615,
1732678611285,
1732056291692,
1732056313206,
1732418475277,
1730447764056,
1732499650718,
1732056407608,
1732055916136,
1730033374383,
1732056130881,
1732056227273,
1729495701045,
1737524097603,
1732500509865,
1732780351022,
1732349614871,
1734574704247
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_EhHm"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_T4eT"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_AAXs"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_sxtU"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_sxtU"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_T4eT"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_EhHm"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_EhHm"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_T4eT"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_AAXs"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11013/Reviewer_sxtU"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11013/Area_Chair_PgFd"
]
],
"structured_content_str": [
"{\"comment\": \"> [W1] In Introduction Section (Line 51-53), you mention GA and its shortcomings, I think a better way of writing here would be providing a brief overview of 2-3 other key knowledge unlearning approaches beyond GA, and summarize 1-2 common shortcomings across these methods that motivate the proposed approach. GA should be only one of those existing typical methods.\\n\\n→ We thank the reviewer for the suggestion. We will revise the introduction section to share other representative approaches such as KL- and preference optimization-based methods for unlearning, then proceed to motivate our approaches.\\n\\n---\\n> [W2] In Introduction Section (Line 76), you mention the application of LoRA to LLM unlearning remains unexplored, however, there are some existing studies using LoRA for LLM unlearning. It would be better to briefly summarize how LoRA was used for unlearning in these two papers, and then explain how their proposed approach differs or improves upon these methods.\\n\\n→ Thank you for sharing additional literature. Upon review, we have found that the first paper proposes orthogonal low-rank adaptation tailored for continual unlearning scenarios [A]. While using LoRA in the experiments, the second paper contributes mainly on testing a new datasets and evaluation metric [B]. In contrast, our work specifically focuses on the stability vs. plasticity trade-off within LoRA-based unlearning, and proposes a novel unlearning loss function and LoRA-initialization towards optimizing this trade-off. We will revise the introduction and related work sections to refer suggested papers.\\n\\n---\\n> [W3] In Introduction Section, a lack of clear summarization of contributions in the paper, making readers difficult to capture the important points of the study. In Related Work Section, a brief comparison between your proposed method and other relevant studies should be presented to better emphasize the advantages of your work.\\n\\n→ We will revise the related work section to add comparisons of previous work vs. our proposed methods.\\n\\n---\\n> [W4] In Section 3.3 (Line 223-227), you should provide a more detailed explanation of how you arrived at these hypotheses. There is still a gap between GA motivation and its weaknesses. To make the illustration more convincible here, a better way would be providing a specific example or mathematical derivation showing how the GA loss function leads to one of the stated problems (e.g., unbounded optimization or inefficient gradient updates).\\n\\n→ Thank you for your insightful comments. Here we provide additional clarification regarding the derivative (Line 218) and its relation to the three issues of GA.\\n\\nGiven the prefix $x\\\\_{<t}$, GA reduces the prediction score of the true token $x_t$ by an amount proportional to $1 - p_\\\\theta(x_t | x_{<t})$ and increases the scores of other tokens $v \\\\neq x_t$ by $p_\\\\theta(v | x_{<t})$. This process effectively shifts the prediction given $x_{<t}$ away from the true token $x_t$, achieving unlearning. Based on this analysis, we share three key issues of GA: \\n1. **Gradient Spread:** GA reduces the true token's score while increasing scores of all other tokens. When using large vocabulary sizes as in most LLMs, this leads to gradients primarily boosting tokens other than the true token $x_t$. \\n2. **Unbounded Loss:** With LLMs, GA reduces $\\\\log(p_\\\\theta(x_t | x_{<t}))$ by maximizing the cross-entropy loss, which by nature of entropy, is an unbounded optimization problem. 
This implies the possibility of divergence as unlearning progresses.\\n3. **Performance Degradation:** Each sequence $x$ in the forget set $\\\\mathcal{D}\\\\_f$ can require different number of model updates for successful unlearning, yet GA applies the same gradient updates to decrease $p\\\\_\\\\theta(x_t | x_{<t})$ at every iteration regardless of this distinction. This leads to unnecessary model updates, which we found to induce catastrophic forgetting of knowledge and generative performance.\\n\\nOur novel objective function IHL is specifically designed to mitigate these limitations (Lines 244-252). We will further clarify this discussion in the revised manuscript. \\n\\n---\\n> [W5] In Section 3.4 (Line 265), a basic and brief description of Fisher Information is necessary here for better understanding of the reason you employ it to address the importance quantification you mentioned before.\\n\\n→ Mathematically, the Fisher Information equals the variance of the partial derivative of the log likelihood with respect to the model parameter (shown in Equation 2). Intuitively, the Fisher Information can be considered a measurement on *how much the LLM output changes following a small change on the model weights*. We will revise the draft for better understanding of our choice in measuring weight importances.\"}",
"{\"comment\": \"Thanks for the effort to address my concerns. I would like to raise the score.\"}",
"{\"title\": \"Follow-up Response to Reviewer EhHm (1/2)\", \"comment\": \"We sincerely thank Reviewer EhHm for the opportunity to address additional concerns. Our follow-up responses are as below, and we have revised our manuscript to clarify these points as well.\\n\\n---\\n> [W2] In Figure 3, it is evident that GD + FILA demonstrates lower accuracy and higher perplexity compared to GD. Similarly, in Figure 5, GD + FILA deviates further from the target (Retain set only) relative to GD. Does this suggest that FILA is ineffective?\\n\\n→ We agree with the reviewer that FILA itself is not effective when combined with GD (which uses GA on $\\\\mathcal{D}_f$ as its unlearning signal). We would like to reiterate that we do not claim FILA to be effective on its own with GD, but our main methodological contribution instead lies in **the synergy between the stability of IHL and efficiency of FILA for LoRA-based unlearning (Lines 258-264)**. \\n\\nAs to why GD + FILA significantly underperforms, this is expected when understanding that FILA's effective role is to initialize adapter weights towards accelerating LoRA-based tuning on $\\\\mathcal{D}_f$ (Lines 264-266). Whether FILA leads to beneficial results in downstream performance depends on which loss function we use. **When using GA loss which can naturally diverge (Lines 220-224), applying FILA makes unlearning diverge even faster, significantly breaking the LLM**. It is only when using with IHL that accelerating unlearning via FILA leads to better results, enjoying the high knowledge retention of IHL alongside the tuning efficiency of FILA. We have revised the manuscript in Lines 419-427 to clarify this point.\\n\\n---\\n> [W3] I understand that the models were chosen for comparison with benchmarks, but I believe it would be more insightful to demonstrate their effectiveness on newer models.\\n\\n→ We appreciate the reviewer's suggestion to experiment with newer models, as they can offer additional insights and strenghen our findings from this work.\\n\\nIn our discussion with Reviewer T4eT, we identified that OPT [A], an open-source LLM family (including models larger than 7B trained on the Pile corpus), could serve as another suitable testbed for our TDEC experiments. Similarly, for TOFU experiments, we could explore more recent models such as Mistral [B] and Llama3 [C]. Note that implementing these would require first full-parameter tuning on the TOFU dataset to prepare the base models.\\n\\nWhile we would like to conduct and share the results of these additional experiments during the rebuttal period, these setups exceed the time and compute resources currently available to us. Therefore, we plan to explore these directions as future work and kindly ask for the reviewer's understanding of our current constraints.\\n\\nThe experiments included in this work were designed to align with the primary objectives of the study and provide a strong proof of concept, offering a solid foundation for further exploration. We sincerely appreciate the reviewer\\u2019s suggestions and will incorporate newer models in future iterations to assess the scalability and broader applicability of our method.\\n\\n[A] [Zhang et al., OPT: Open Pre-trained Transformer Language Models. arXiv 2022.](https://arxiv.org/abs/2205.01068)\\\\\\n[B] [Jiang et al., Mistral 7B. arXiv 2023.](https://arxiv.org/abs/2310.06825)\\\\\\n[C] [Meta AI, The Llama 3 Herd of Models. arXiv 2024.](https://arxiv.org/abs/2407.21783)\"}",
"{\"comment\": \"Thank you for your efforts to address my concerns. I'm still a little bit unsure about the scaling effect, the current explanation is not strong and with not enough data points. I will maintain my score (and it is already tend to accept), but thanks for your response!\"}",
"{\"title\": \"Common Response to All Reviewers\", \"comment\": \"We thank all reviewers for your time and commitment made into reviewing our work. We are deeply encouraged by the overall positive feedback on our valuable motivation towards stable and cost-efficient LLM unlearning [EhHm, T4eT, sxtU, AAXs], strong theoretical foundation [EhHm, T4eT, sxtU], and comprehensive experimentation with promising results [EhHm, T4eT, AAXs].\\n\\nIn light of reviewer-specific questions, our responses and clarifications can be found below in each respective comment. We are in the process of revising our manuscript to reflect the reviewers' comments, and will upload the revised version with a reminder soon. Should any additional questions arise, please share with us and we would be happy to discuss further. \\n\\nThank you again for your service.\\n\\nSincerely,\\n\\nAuthors of Submission11013.\"}",
"{\"title\": \"Gentle Reminder for Reviewer EhHm\", \"comment\": \"Thank you again for your insightful comments on our work. We are writing to kindly remind the reviewer that we have shared a rebuttal and also revised our manuscript to address your concerns and questions.\\n\\nPlease take the time to further review our work, and in case of any remaining concerns or clarifications, we would be grateful if you could share them with us. Your feedback has been invaluable in improving our work, and we look forward to your continued guidance.\\n\\nSincerely,\\nAuthors of Submission11013.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your effort to address my questions. Your detailed revisions and responses have resolved most of my doubts. I have raised my score.\"}",
"{\"comment\": \"Thank you again for your insightful comments on our work. We are writing to kindly remind the reviewer that we have shared a response to your additional question Q2 above.\\n\\nAs the revision deadline is approaching soon, we would greatly appreciate if you could take the time to review our response, and share any additional feedback or concerns at your earliest convenience. Your insights have been invaluable in refining our work, and we want to ensure we incorporate any suggestions you may have before the deadline. We look forward to your continued guidance.\\n\\nSincerely,\\n\\nAuthors of Submission11013.\", \"title\": \"Gentle Reminder for Reviewer T4eT\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your rebuttal. I decide to maintain my score.\"}",
"{\"summary\": \"The authors identify the limitations of current unlearning methods (e.g., Gradient Ascent (GA)), which can lead to unstable optimization and catastrophic forgetting of retrained knowledge. To overcome these challenges, the paper introduces two novel techniques for robust and efficient unlearning:\\n\\n1. **Inverted Hinge loss (HIL):** This new loss function suppresses unwanted tokens while maintaining fluency by boosting the probability of the next most likely token.\\n\\n2. **Fisher-Initialization of Low-rank Adapters (FILA):** Developed through low-rank approximation weighted with relative Fisher information, this method focuses updates on parameters critical for removing targeted knowledge.\\n\\nThe paper demonstrates the effectiveness of these techniques through experiments on the Training Data Extraction Challenge dataset using GPT-Neo models and on the TOFU benchmark with Phi-1.5B and Llama2-7B models. The proposed approach successfully removes sensitive information while maintaining the reasoning and generative capabilities of the models with minimal impact on performance.\\n\\nIn summary, this paper provides innovative solutions to the drawback of GA and demonstrates the effectiveness of the solutions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"### Originality\", \"This paper points out the shortcomings of Gradient Ascent (GA) by analyzing its inverse.\", \"This paper proposes two strategies to improve these shortcomings.\", \"This paper demonstrates the effectiveness of their improvements on two datasets.\", \"### Clarity\", \"The structure of this paper is clear, and most of the content is explained clearly.\", \"### Significance\", \"This paper provides insights into knowledge unlearning through the analysis of Gradient Ascent (GA).\"], \"weaknesses\": [\"This paper lacks the state-of-the-art knowledge unlearning baselines (such as [1][2]). Although the main goal of the paper is to address the shortcomings of GA, incorporating the state-of-the-art knowledge unlearning for comparison would make it more convincing.\", \"Some descriptions are not clear enough. For example, lines 221-223 should include more explanation for the reasons. The authors should explain in detail why GA increases the prediction score for all other tokens $v \\\\neq x_t$ in the vocabulary.\", \"From the experimental results, when only IHL is used, the performance is worse than the original GA. Does this contradict the paper's claim that IHL is designed to address the shortcomings of GA and the analysis of derivatives of GA?\", \"The paper devotes too much content to the background of knowledge unlearning in the Abstract and in the first paragraph of the Introduction. Since knowledge unlearning is a common problem, I believe it is unnecessary to describe it in such detail. The main content should focus on describing the research conducted in this paper. Specifically, Figure 1 should illustrate the proposed approach rather than the knowledge unlearning problem.\", \"[1] Zhang R, Lin L, Bai Y, et al. Negative preference optimization: From catastrophic collapse to effective unlearning[J]. arXiv preprint arXiv:2404.05868, 2024.\", \"[2] Gao C, Wang L, Weng C, et al. Practical unlearning for large language models[J]. 
arXiv preprint arXiv:2407.10223, 2024.\"], \"questions\": \"What is the reason for deriving the unlearning mechanism of GA from the formulas in lines 217-220?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response and a new question\", \"comment\": \"Thank you for your response! I have read your reply and it addresses some of my concerns, which is greatly appreciated! I have a new question: The majority of the experiments in the paper are conducted on models smaller than 3B parameters. Could you provide results for models with 7B parameters or larger, or at least illustrate the trend of performance as the model size scales up?\"}",
"{\"comment\": \"W1: OK\", \"w2\": \"Apologies for the confusion earlier; the missing legend caused some misunderstanding. Allow me to clarify my question based on the updated version. In Figure 3, it is evident that GD + FILA demonstrates lower accuracy and higher perplexity compared to GD. Similarly, in Figure 5, GD + FILA deviates further from the target (Retain set only) relative to GD. Does this suggest that FILA is ineffective?\", \"w3\": \"I understand that the models were chosen for comparison with benchmarks, but I believe it would be more insightful to demonstrate their effectiveness on newer models.\\n\\nW4. For \\u201cLoRA + FILA only\\u201d, I am referring to the loss function GA. In Table 1, the results of \\u201cGD + LoRA initialized with FILA\\u201d are also missing. The settings are to demonstrate the effectiveness of using FILA alone.\"}",
"{\"comment\": \"> [W1] This paper lacks the state-of-the-art knowledge unlearning baselines [A, B]. Incorporating the state-of-the-art knowledge unlearning for comparison would make it more convincing.\\n\\n→ We thank the reviewer for sharing additional baselines. \\n\\nRegarding the first paper, while Negative Preference Optimization (NPO) alleviates the divergence speed of GA from linear to logarithmic, its backbone unlearning objective still hinges on minimizing the likelihood of $\\\\mathcal{D}_f$ (or maximizing the cross-entropy), and hence NPO still suffers from its divergent behavior. As a result, Figure 5 in [A] shows NPO still leads to suboptimal knowledge retention, as most unlearning trajectories with NPO lead to decreases in model utility of more than 0.1. In comparison, our analogous results in our Figure 5(b) show negligible deterioration in model utility, outperforming NPO on the TOFU benchmark with Llama-2-7b. As [A] has publicly available code, we will revise the draft and add experimental comparisons against NPO.\\n\\nFor the second paper, we are not able to find publicly available code and the paper proposes an unlearning method tailored specifically for the continual unlearning scenario where multiple unlearning requests are made [B]. Because of this difference in experimental setup, we are currently unable to make direct empirical comparisons with our results. For now, we will revise the paper to mention [B] in our related works section.\\n\\nOn a sidenote, please understand that both papers are contemporaneous work, [A] being published in COLM 2024 held this October and [B] an arXiv paper posted this July.\\n\\n---\\n> [W2] Some descriptions are not clear enough. The authors should explain in detail why GA increases the prediction score for all other tokens in the vocabulary.\\n\\n→ The increase on all other tokens' scores in GA is due to the cross-entropy loss formulation used in all next-token prediction frameworks. Given a logit value $v_{x}$ for each possible token $x$ in vocabulary set $\\\\mathcal{V}$, the probability of generating the token $x_t$ given the prefix $x_{<t}$ is given by\\n$$\\np(x_t | x_{<t}) = \\\\dfrac{\\\\exp(v_{x_t})}{\\\\sum_{x \\\\in \\\\mathcal{V}} \\\\exp(v_x)}\\n$$ \\nNotice there are two ways to decrease $p(x_t | x_{<t})$:\\n1. We can decrease logit value $v_{x_t}$ correponding to the unwanted token $x_t$.\\n2. We can increase the logit value $v_x$ for any possible token $x$ other than $x_t$.\\n\\nIn GA, gradients flow both ways, which leads to equally increasing the logit values of all other possible tokens, and hence the problems discussed in Lines 223-227. We will revise the draft for better clarity.\\n\\n---\\n> [W3] From the experimental results, when only IHL is used, the performance is worse than the original GA. Does this contradict the paper's claim that IHL is designed to address the shortcomings of GA and the analysis of derivatives of GA?\\n\\n→ We strongly assert that, the merit of IHL mainly lies in its superior stability in retaining knowledge and generative performance, and that our experimental results are indicative of this strength. \\n\\nIn our TDEC experiments (\\u00a74.1), we observe that IHL achieves superior stability (i.e., overcoming catastrophic forgetting during unlearning) compared to GA in most cases. 
Specifically, when comparing the results in Table 1 between LoRA (using GD) and LoRA+IHL (where GA in GD is replaced by IHL), we find that LoRA+IHL consistently outperforms LoRA in Reasoning, Dialogue, and Pile. Additionally in Figure 3, when comparing the results from GD (blue color) and IHL (orange color), we find that, except for certain cases with GPT-Neo-1.3B (e.g., rank = 32 for Dialogue), IHL outperforms GD in almost all ranks for Reasoning, Dialogue, and the Pile. \\n\\nAlso in our TOFU experiments (\\u00a74.3), Figure 5 shows that IHL (green color, replacing GA with IHL) consistently shows negligible decrease in model utility, whereas GD (orange color, using GA) quickly loses its previously acquired knowledge, deviating significantly from the trajectory towards the Retain Set Only oracle (marked as $\\\\star$). Based on these experimental results, we hope the reviewer reconsiders the empirical superiority of IHL over GA.\\n\\n---\\n> [W4] The paper devotes too much content to the background of knowledge unlearning in the Abstract and in the first paragraph of the Introduction. Figure 1 should illustrate the proposed approach rather than the knowledge unlearning problem.\\n\\n→ We appreciate the reviewers suggestion. We will revise the abstract and introduction section to reduce the content on general knowledge unlearning, and also replace Figure 1 with a figure that specifically illustrates our proposed methods IHL and FILA.\"}",
"{\"comment\": \"> [Q1] What is the reason for deriving the unlearning mechanism of GA from the formulas in lines 217-220?\\n\\n→ In [C], Gradient Ascent (GA) was demonstrated to successfully unlearn knowledge from large language models (LLMs). Based on these results, GA has become the foundational algorithm in LLM unlearning, often combined with various regularization terms added to maintain general knowledge outside the forget set (See Section 2 of the manuscript). However, the analysis of how GA performs unlearning and the potential issues with this approach have not been adequately addressed. To highlight these concerns, we analyze the derivative of GA, as described in Lines 218-227, to expose the limitations and underlying mechanisms of GA's unlearning process.\\n\\nIn addition to the explanation in Lines 221-227, we clarify how GA performs unlearning during gradient descent: Given the prefix $x_{<t}$, GA reduces the prediction score of the true token $x_t$ by an amount proportional to $1 - p_\\\\theta(x_t | x_{<t})$ and increases the scores of other tokens $v \\\\neq x_t$ by $p_\\\\theta(v | x_{<t})$. This process shifts the predicted token for $x_{<t}$ away from the true token, achieving unlearning. \\n\\nBased on this analysis, we share three key issues of GA as noted in our manuscript: \\n1. **Gradient Spread:** GA reduces the true token's score while increasing scores of all other tokens. When using large vocabulary sizes as in most LLMs, this leads to gradients primarily boosting tokens other than the true token $x_t$. \\n2. **Unbounded Loss:** With LLMs, GA reduces $\\\\log(p_\\\\theta(x_t | x_{<t}))$ by maximizing the cross-entropy loss, which by nature of entropy, is an unbounded optimization problem. This implies the possibility of divergence as unlearning progresses.\\n3. **Performance Degradation:** Each sequence $x$ in the forget set $\\\\mathcal{D}\\\\_f$ can require different number of model updates for successful unlearning, yet GA applies the same gradient updates to decrease $p_\\\\theta(x_t | x_{<t})$ at every iteration regardless of this distinction. This leads to unnecessary model updates, which we found to induce catastrophic forgetting of knowledge and generative performance.\\n\\nOur novel objective function IHL is specifically designed to mitigate these limitations (Lines 244-252). We will further clarify this discussion in the revised manuscript. \\n\\n[A] [Zhang et al., Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. arXiv 2024.](https://arxiv.org/abs/2404.05868)\\n\\n[B] [Gao et al., Practical Unlearning for Large Language Models. arXiv 2024.](https://arxiv.org/abs/2407.10223)\\n\\n[C] [Jang et al., Knowledge unlearning for mitigating privacy risks in language models. ACL 2023.](https://arxiv.org/abs/2210.01504)\"}",
"{\"title\": \"Response to New Question\", \"comment\": \"We sincerely thank Reviewer T4eT for the prompt response to our rebuttal. Below is our response to your new question.\\n\\n---\\n> [Q2] The majority of the experiments in the paper are conducted on models smaller than 3B parameters. Could you provide results for models with 7B parameters or larger, or at least illustrate the trend of performance as the model size scales up?\\n\\n→ Unfortunately, we are unable to produce results from LLMs larger than 7B at the moment due to limited time and compute resources. Though limited in scale, we can still deduce several insights with respect to the model size, details on which are shared below following our responses on use of larger models.\\n\\n1. Scaling TDEC experiments\\n - Please note that for TDEC, we are limited to models publicly known to be pretrained on the Pile dataset [A], as the TDEC dataset contains sequences extracted from the Pile.\\n - Unfortunately, the GPT-Neo model family [B] chosen because of this reason following previous work on GA [C] only scales up to 2.7B. Even though the GPT-NeoX-20B model [D] was developed along the same line of work, this model differs from GPT-Neo in ways other than its model size (e.g., different tokenizer), and thus we are not able to make conclusive observations solely based on the model size.\\n - To the best of our knowledge, OPT [E] is the only open-source family of LLMs pretrained on the Pile with model sizes spanning beyond 7B. However, due to the large time cost required for TDEC evaluation, we are unable to run full set of experiments with either within the rebuttal period. We will add these experiments as future work.\\n\\n2. Scaling TOFU experiments\\n - For experiments on the TOFU benchmark [F], recall that we consider the LLM tuned on the TOFU dataset via full-parameter finetuning as our base model. These base models are publicly available only for Phi-1.5B and Llama2-7B via the TOFU repository, results from which we share in Section 4.3.\\n - Therefore, to scale our experiments up to larger models such as Llama2-13B or Llama2-70B, we need to prepare additional base models by full-parameter-finetuning the respective models on TOFU. We found this is not possible under our limited GPU compute.\\n - While we could more efficiently prepare base models via (1) parameter-efficient fine-tuning (e.g. LoRA) or (2) quantization-aware training, both largely deviate from the setup used in the TOFU benchmark, and are likely to introduce confounding factors preventing us to make accurate observations.\\n - Due to this reason, we are unable to extend our TOFU experiments to Llama2-13B and 70B at the moment, but would like to add as future work when given access to larger compute resources.\\n\\n3. Insights on model size from current results.\\n - In Table 1, comparing the Reasoning and Dialogue performances across GPT-Neo models of increasing size shows that **preserving language capabilities under LoRA-based unlearning becomes more challenging as the model becomes larger**. For instance, the loss in Reasoning Accuracy of GD worsens from -2.6 to -4.8, then to -6.4 as we increase the model size from 125M, 1.3B, and 2.7B. \\n - We believe this trend is due to **larger models being more likely to memorize pretraining data than smaller models** as demonstrated in previous work [G], and tuning the model to fully forget $\\\\mathcal{D}\\\\_f$ requires more weight perturbations under LoRA tuning, hence the greater loss in previously acquired knowledge. 
This is also reflected in GPT-Neo-1.3B and 2.7B requiring a larger number of unlearning epochs than GPT-Neo-125M.\\n - Though not exactly comparable, the TOFU results from Phi-1.5B and Llama2-7B (Figure 5) also shows this behavior of larger models. When comparing the unlearning trajectories of GD using the two models, we see that GD increases the forget quality significantly faster in Phi-1.5B than in Llama2-7B. \\n - Despite this difficulty, our IHL+FILA method best minimizes the loss in model utility consistently across all models, which we expect to be the case with LLMs larger than 7B as well.\\n\\n[A] [Gao et al., The Pile: An 800GB Dataset of Diverse Text for Language Modeling. arXiv 2020.](https://arxiv.org/abs/2101.00027)\\\\\\n[B] [Black et al., GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow.](https://github.com/EleutherAI/gpt-neo)\\\\\\n[C] [Jang et al., Knowledge unlearning for mitigating privacy risks in language models. ACL 2023.](https://arxiv.org/abs/2210.01504)\\\\\\n[D] [Black et al., GPT-NeoX-20B: An Open-Source Autoregressive Language Model. Workshop at ACL 2022.](https://arxiv.org/abs/2204.06745)\\\\\\n[E] [Zhang et al., OPT: Open Pre-trained Transformer Language Models. arXiv 2022.](https://arxiv.org/abs/2205.01068)\\\\\\n[F] [Maini et al., TOFU: A Task of Fictitious Unlearning for LLMs. arXiv 2024.](https://arxiv.org/abs/2401.06121)\\\\\\n[G] [Carlini et al., Quantifying Memorization Across Neural Language Models. ICLR 2023.](https://arxiv.org/abs/2202.07646)\"}",
"{\"summary\": \"The paper proposes a framework to remove sensitive information from LLMs without retraining them from scratch. Recognizing the limitations of common unlearning methods like Gradient Ascent (GA), which risks instability and unintended forgetting, the authors introduce two new techniques. The Inverted Hinge Loss (IHL) method enhances stability by suppressing unwanted tokens with the next most likely alternative, while the Fisher-weighted Initialization of Low-rank Adapters (FILA) uses Fisher information to initialize LoRA adapters, selectively targeting parameters associated with unwanted information to optimize unlearning. This dual approach was evaluated on the Training Data Extraction Challenge and TOFU benchmark with models such as GPT-Neo, Phi-1.5B, and Llama2-7B, achieving efficient unlearning with minimal loss to the model\\u2019s reasoning and generative capabilities, and demonstrating improved performance over existing methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The motivation is clearly explained.\\n2. Extensive experiments have been conducted to prove the effectiveness of the proposed methods. \\n3. The theoretical analysis strengthens the rationale of the proposed methods.\", \"weaknesses\": \"1. One of the most significant contributions of this paper is the proposal of Inverse Hard Loss (IHL), which claims to increase the probability of the second-best token only. However, it is not clear why IHL does not affect the probability of other tokens. Based on the definition of IHL in Lines 233, the probability of all other tokens is impacted. As such, IHL can only address problem 1 (Line 224) but cannot address problems 2 and 3 of GA (Lines 224 ~ 226).\\n2. In Figures 3 and 5, the unlearning performance of employing only IHL (represented in green) does not outperform the GD baseline (depicted in blue), which undermines the effectiveness of IHL. \\n3. The main results only use GPT-neo models, which are old models. It is better to use more recent models like Llama and Mistral models to make it more practically useful. It is also inconsistent to use different models for main results and analysis. \\n4. There are no ablations studies for the following settings: 1) full parameter fine-tuning with IHL; 2) LoRA + FILA only; 3) GD + LoRA + FILA.\", \"questions\": \"1. In Figure 3, why are some data points missing?\\n2. It is better to add legends in Figures 3 and 4 to improve the clarity.\\n3. It is better to define \\u201cModel Utility\\u201d within the paper instead of referring readers to other papers.\\n4. For the Hinge loss equation in Line 233, since the probability p(.) is in the range of (0,1), the second item within max() function is always larger than 0, right? If so, IHL is to reduce the probability of true tokens but to increase the probability of other tokens, right?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your response. We greatly appreciate the opportunity to address your concerns in our rebuttal.\\n\\nIf our rebuttal sufficiently addressed your initial concerns, we kindly ask you to reconsider the scoring in light of the clarifications provided. In case of any additional questions and concerns on our work that led to your decision to maintain your score, we would be grateful if you could share them with us. Your insights are invaluable, and we are committed to improving our work based on your guidance.\\n\\nSincerely,\\nAuthors of Submission11013.\"}",
"{\"comment\": \"> [W6] In Table 1, to make the table more readable, the data of important results should be highlighted via bolding the best result in each row or using color coding to show relative performance between methods.\\n\\n→ We agree with the reviewer. We will revise Table 1 to highlight best results per model size and also add relative performance gains/losses with color coding for better readability.\\n\\n---\\n> [W7] There are some writing flaws.\\n\\n→ We thank the reviewer for spotting these writing errors. We will revise the draft to correct the errors.\\n\\n---\\n> [Q1] In Introduction Section (Line 72-74), can you explain more about the reason that low-rankness can be beneficial in stabilizing optimization and preventing catastrophic forgetting? Clearer illustration would be better here.\\n\\n→ Intuitively, the weight changes that can be made within the LLM is constrained by the low-rank structure of LoRA. Compared to full parameter fine-tuning, this constraint effectively prevents the model from \\\"changing too much\\\" from the pretrained model, thereby better retaining previously acquired knowledge. As we will revise Figure 1 to illustrate our methods, we will also aim to depict this notion of stability from low-rankness in the figure.\\n\\n---\\n> [Q2] In Table 1, why you record results from running different epochs? Does it mean the method reaches the optimal with these epochs?\\n\\n→ For the TDEC experiments shown in Table 1, we follow previous work [A] and stop the unlearning process when the model meets a threshold based on $n$-gram and token-wise overlaps against the unlearning target sequences: after each unlearning epoch, we measure (1) the $n$-gram Extraction Likelihood and (2) Memorization Accuracy of the model on unlearning target sequences, then compare those to the same measurements obtained from a held-out validation set. If the values are smaller than those from the held-out set, it means the model generates target sequences similarly to unforeseen sequences, indicating successful unlearning.\\n\\n---\\n> [Q3] In Experiments Section, why different LLMs are used for those two tasks? Have you evaluated more popular and larger LLMs such as Llama3.1-8B? I suggest giving explanation of the strategy and purpose of model selection.\\n\\n→ For our TDEC experiments, we choose to use the GPT-Neo family following previous work [C]. Note that because TDEC consists of unlearning targets chosen from the Pile dataset [D], our model choice is limited to those known to be pretrained on the Pile corpus, which excludes more recent larger LLMs such as Llama3.1-8B.\\n\\nFor our TOFU experiments, we use Phi-1.5B and Llama2-7B models following the original benchmark paper [E]. Not to mention the easy reproducibility from publicly available base models from which we run unlearning, this also ensures that our empirical findings are directly comparable with original results from the TOFU paper [E] as well as results from papers that experiment unlearning on TOFU [A, F].\\n\\n[A] [Gao et al., Practical Unlearning for Large Language Models. arXiv 2024.](https://arxiv.org/abs/2407.10223)\\n\\n[B] [Gundavarapu et al., Machine Unlearning in Large Language Models. arXiv 2024.](https://arxiv.org/abs/2405.15152)\\n\\n[C] [Jang et al., Knowledge unlearning for mitigating privacy risks in language models. ACL 2023.](https://arxiv.org/abs/2210.01504)\\n\\n[D] [Gao et al., The Pile: An 800GB Dataset of Diverse Text for Language Modeling. 
arXiv 2020.](https://arxiv.org/abs/2101.00027)\\n\\n[E] [Maini et al., TOFU: A Task of Fictitious Unlearning for LLMs. arXiv 2024.](https://arxiv.org/abs/2401.06121)\\n\\n[F] [Zhang et al., Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. arXiv 2024.](https://arxiv.org/abs/2404.05868)\"}",
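The stopping rule described in the reply to [Q2] above can be made concrete with a few lines of code. The sketch below implements only the token-wise Memorization Accuracy half of the check (the n-gram Extraction Likelihood is analogous), assumes a Hugging Face-style causal LM that exposes `.logits`, and ignores padding; the threshold is simply the value the same statistic takes on the held-out validation set.

```python
import torch

@torch.no_grad()
def memorization_accuracy(model, token_ids: torch.Tensor) -> float:
    """token_ids: (B, T) forget-set sequences (no padding handling here).
    Fraction of next tokens the model greedily predicts correctly."""
    logits = model(token_ids).logits             # assumes a causal LM exposing .logits
    preds = logits[:, :-1].argmax(dim=-1)        # prediction for token t+1 given the prefix
    return (preds == token_ids[:, 1:]).float().mean().item()

def unlearning_done(model, forget_ids, ma_threshold_heldout: float) -> bool:
    """Stop once the forget sequences are memorized no better than held-out text."""
    return memorization_accuracy(model, forget_ids) <= ma_threshold_heldout
```

In practice both MA and the Extraction Likelihood are compared against their held-out counterparts after every epoch, which is why Table 1 reports a different number of unlearning epochs per run.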
"{\"comment\": \"> [W1] It is not clear why IHL does not affect the probability of other tokens. Based on the definition of IHL (Line 233), the probability of all other tokens is impacted. As such, IHL can only address problem 1 (Line 224) but cannot address problems 2 and 3 of GA (Lines 224-226).\\n\\n→ To address this question, we will briefly clarify our approach based on the analysis of the derivative of $\\\\mathcal{L}\\\\_{IHL}$ presented in Lines 235\\u2013253 of the manuscript. As stated in Lines 249\\u2013250, the proposed $\\\\mathcal{L}\\\\_{IHL}$ does not entirely ignore the logits of tokens other than the true token and the second-highest prediction token; instead, it allows these logits to increase at a relatively slower rate. \\n\\nFor example, with the standard $\\\\mathcal{L}\\\\_{GA}$, as shown in the derivative in Line 218, the logits for all tokens except the true token are trained to increase proportionally to their current logit values. In this case, if the logit for the true token $x_t$ decreases, the logits of the other tokens increase even more significantly. \\n\\nHowever, the derivative of $\\\\mathcal{L}\\\\_{IHL}$ in Line 238 demonstrates a different learning pattern. Specifically, if unlearning has not yet succeeded (i.e., when $p\\\\_\\\\theta(x_t | x\\\\_{<t})$ is still greater than $p\\\\_\\\\theta(v^* | x\\\\_{<t})$), the derivative for tokens where $v \\\\neq x_t$ and $v \\\\neq v^*$ shows that the gradient for other tokens scales with $p_\\\\theta(v|x_{<t})$ by a factor equal to the difference between $p\\\\_\\\\theta(x_{t}|x_{<t})$ and $p\\\\_\\\\theta(v^{*}| x_{<t})$. This results in only a small fraction of the $p\\\\_\\\\theta(v | x_{<t})$ logit increase. As a result, compared to $\\\\mathcal{L}\\\\_{GA}$, this leads to a relatively slower increase in the logits of other tokens.\\n\\n---\\n> [W2] In Figures 3 and 5, the unlearning performance of employing only IHL does not outperform the GD baseline, which undermines the effectiveness of IHL.\\n\\n→ We strongly assert that, the merit of IHL mainly lies in its superior stability in retaining knowledge and generative performance, and that our experimental results are indicative of this strength. \\n\\nIn our TDEC experiments (\\u00a74.1), we observe that IHL achieves superior stability (i.e., overcoming catastrophic forgetting during unlearning) compared to GA in most cases. Specifically, when comparing the results in Table 1 between LoRA (using GD) and LoRA+IHL (where GA in GD is replaced by IHL), we find that LoRA+IHL consistently outperforms LoRA in Reasoning, Dialogue, and Pile. Additionally in Figure 3, when comparing the results from GD (blue color) and IHL (orange color), we find that, except for certain cases with GPT-Neo-1.3B (e.g., rank = 32 for Dialogue), IHL outperforms GD in almost all ranks for Reasoning, Dialogue, and the Pile. \\n\\nAlso in our TOFU experiments (\\u00a74.3), Figure 5 shows that IHL (green color, replacing GA with IHL) consistently shows negligible decrease in model utility, whereas GD (orange color, using GA) quickly loses its previously acquired knowledge, deviating significantly from the trajectory towards the Retain Set Only oracle (marked as $\\\\star$). Based on these experimental results, we hope the reviewer reconsiders the empirical superiority of IHL over GA.\\n\\n---\\n> [W3] The main results only use GPT-neo models, which are old models. It is better to use more recent models like Llama and Mistral models to make it more practically useful. 
It is also inconsistent to use different models for main results and analysis.\\n\\n→ We would like to clarify that our experimental models are deliberately chosen for meeting unlearning evaluation requirements and maintaining consistency against benchmark standards. \\n\\n- For our TDEC experiments (\\u00a74.1), we choose to use the GPT-Neo family following previous work [A]. Note that because TDEC consists of unlearning targets chosen from the Pile dataset [B], our model choice is limited to those known to be pretrained on the Pile corpus, which excludes more recent larger LLMs such as Llama3.1-8B.\\n- For our TOFU experiments (\\u00a74.3), we use Phi-1.5B and Llama2-7B models following the original benchmark paper [C]. Not to mention the easy reproducibility due to publicly available base models from which we run unlearning, this also ensures that our empirical findings are directly comparable with original results from the TOFU paper [C] as well as results from papers that experiment unlearning on TOFU [D, E].\"}",
"{\"summary\": \"The paper works on machine unlearning in LLMs, particularly focusing on the challenges of removing specific data instances from a model's memory without retraining from scratch. The authors propose two strategies: Inverted Hinge Loss (IHL) and Fisher-Initialization of Low-rank Adapters (FILA). IHL is designed to replace the unbounded negative cross-entropy loss in gradient ascent with a more stable and efficient loss function. FILA aims to initialize low-rank adapters in a way that prioritizes the removal of unwanted information and accelerates the unlearning process. Extensive experiments validates that the proposed methods significantly outperform existing baselines in efficiency and post-unlearning performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Authors analyze the derivatives of GA and highlight its shortcomings, the motivation is clear and the theoretical foundation strengthens the rationale for the proposed methods.\\n2. The introduction of IHL addresses the instability issues of GA by focusing gradient updates on a minimal number of viable replacements for the ground-truth token. This results in a more controlled and stable unlearning process.\\n3. The proposed strategies are effective. The authors evaluate the methods on multiple datasets and multiple model sizes. This comprehensive evaluation demonstrates the robustness and generalizability of the proposed methods.\", \"weaknesses\": \"The intuition and connection between the proposed methods, IHL and Fisher-Initialization of FILA, appear somewhat weak. This makes the paper feel like it is stacking two separate tricks rather than offering a unified and coherent approach. A more systematic linkage between these methods would enhance the overall coherence and impact of the paper.\", \"questions\": \"How robust are the proposed methods to changes in the data distribution of the forget set? For instance, if the forget set contains highly diverse or outlier data, would the unlearning process still be effective?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> [W4] There are no ablations studies for the following settings: 1) full parameter fine-tuning with IHL; 2) LoRA + FILA only; 3) GD + LoRA + FILA.\\n\\n→ We appreciate the reviewer for the suggestions. For further insights, we will add the following ablation studies.\\n- Full parameter fine-tuning with IHL as well as other methods.\\n- GD + LoRA initialized with FILA. Note this is already presented for TDEC in Figure 3 in green. We will add it to the TOFU experiments as well.\\n\\nRegarding \\\"2) LoRA + FILA only\\\", could the reviewer clarify which setup this is referring to? It is unclear which objective function the model should be trained on.\\n\\n---\\n> [Q1] In Figure 3, why are some data points missing?\\n\\n→ We would like to clarify that no data points are missing in Figure 3: each unlearning method has 5 data points on each plot corresponding to 5 different forget set. It is possible that some cases where unlearning was not successful within 20 epochs (marked with $\\\\times$) appear missing due to large overlaps against successful cases (marked with $\\\\circ$). For better readability, we will adjust Figure 3 to make the data points more distinguishable.\\n\\n---\\n> [Q2] It is better to add legends in Figures 3 and 4 to improve the clarity.\\n\\n→ We appreciate the reviewer's suggestion. We will revise Figures 3 and 4 to insert a legend to indicate the method-color correspondence currently covered in the captions.\\n\\n---\\n> [Q3] It is better to define \\u201cModel Utility\\u201d within the paper instead of referring readers to other papers.\\n\\n→ We will add the definition of Model Utility in Section 4.3 for clarity. For reference, Model Utility measures the extent to which the model retains useful information after unlearning. This is done by aggregating (1) the probability of correct answers, (2) ROUGE-L scores, and (3) Truth Ratio of correct answers vs. incorrect answers on questions from three datasets covering the retain set of fictitious authors, real authors, and world facts.\\n\\n---\\n> [Q4] For the Hinge loss equation in Line 233, since the probability $p(\\\\cdot)$ is in the range of (0,1), the second item within $\\\\max(\\\\cdot)$ function is always larger than 0, right? If so, IHL is to reduce the probability of true tokens but to increase the probability of other tokens, right?\\n\\n→ As mentioned in [W1], the reviewer is correct that IHL increases the logits of other tokens when unlearning for $x_t$ is not yet complete (i.e., $p(x\\\\_t | x\\\\_{<t}) > p(v^* | x\\\\_{<t})$). However, it behaves differently once unlearning is achieved (i.e., $p(x_t | x_{<t}) < p(v^* | x_{<t})$). Examining the derivative in Line 238 for tokens $v \\\\neq x_t$ and $v \\\\neq v^*$, we observe that when $p(x_t | x_{<t}) < p(v^* | x_{<t})$, the final gradient becomes positive. As explained in Lines 250\\u2013253, this indicates that when unlearning is complete for $x_t$, the logits for other tokens are learned to decrease by a very small fraction of $p(v | x_{<t})$. 
This unique property not only distinguishes $\\\\mathcal{L}\\\\_{IHL}$ from $\\\\mathcal{L}\\\\_{GA}$ which consistently increases other tokens\\u2019 logits, but also reinforces the bounded nature of $\\\\mathcal{L}\\\\_{IHL}$.\\n\\nWe also agree with the reviewer that the second item within $\\\\max(\\\\cdot)$ is always larger than 0, and will thus will revise the formulation of IHL to\\n$$\\n\\\\mathcal{L}\\\\_{\\\\text{IHL}} = 1 + p\\\\_\\\\theta (x_t | x_{<t}) - \\\\max\\\\_{v \\\\neq x_t}(p\\\\_\\\\theta (v | x_{<t}))\\n$$Originally, the outer $\\\\max(\\\\cdot)$ was placed when testing margin terms other than 1, but since using 1 as in the original hinge loss worked best, we adhere to the formulation above.\\n\\n\\n[A] [Jang et al., Knowledge unlearning for mitigating privacy risks in language models. ACL 2023.](https://arxiv.org/abs/2210.01504)\\n\\n[B] [Gao et al., The Pile: An 800GB Dataset of Diverse Text for Language Modeling. arXiv 2020.](https://arxiv.org/abs/2101.00027)\\n\\n[C] [Maini et al., TOFU: A Task of Fictitious Unlearning for LLMs. arXiv 2024.](https://arxiv.org/abs/2401.06121)\\n\\n[D] [Gao et al., Practical Unlearning for Large Language Models. arXiv 2024.](https://arxiv.org/abs/2407.10223)\\n\\n[E] [Zhang et al., Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. arXiv 2024.](https://arxiv.org/abs/2404.05868)\"}",
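For concreteness, the revised formulation above drops into code almost verbatim. The sketch below is a minimal PyTorch rendering; the batching and padding conventions are assumptions rather than details from the paper, and in the full GD-style objective this term would be combined with a standard language-modeling loss on the retain set.

```python
import torch

def inverted_hinge_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        pad_mask: torch.Tensor) -> torch.Tensor:
    """logits: (B, T, V) next-token logits; targets: (B, T) ground-truth tokens;
    pad_mask: (B, T), 1 for real tokens and 0 for padding."""
    probs = logits.softmax(dim=-1)                                  # p_theta(. | x_<t)
    p_true = probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)    # p(x_t | x_<t)

    # Most likely token other than the ground truth: zero out the true token's
    # probability, then take the max over the vocabulary.
    p_other = probs.scatter(-1, targets.unsqueeze(-1), 0.0).max(dim=-1).values

    loss = 1.0 + p_true - p_other          # bounded in [0, 2], unlike -log p of GA
    return (loss * pad_mask).sum() / pad_mask.sum().clamp(min=1)
```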
"{\"comment\": \"> [W1] A more systematic linkage between IHL and FILA would enhance the overall coherence and impact of the paper.\\n\\n→ We thank the reviewer for bringing this point. We would like to clarify that FILA is specifically designed to cope with the shortcoming of IHL, namely its slow unlearning speed.\\n\\nAs observed in all our experimental results, replacing the negative cross-entropy loss with our IHL leads to superior retention in previously acquired knowledge and generative capabilities, but increases the number of epochs required to fully forget the unlearning targets. Therefore, FILA is designed to accelerate the unlearning process while enjoying the knowledge retention capability of IHL. In Figure 3, note that applying only FILA on top of GD easily leads to significant loss in overall performance, implying that the stability of IHL and the efficiency from FILA form a strong synergy in LoRA-based LLM unlearning.\\n\\nWe will revise the Introduction and Proposed Method sections to clarify this linkage.\\n\\n---\\n> [Q1] How robust are the proposed methods to changes in the data distribution of the forget set?\\n\\n→ While limited to forget sets designated by the TDEC and TOFU benchmarks, we believe our experiments demonstrate the robustness of our methods IHL and FILA under varying data diversity. Specifically, the forget sets in TDEC contain sequences from a wide variety of sources such as Github code, the Pile CC, Books3, Freelaw, etc. [A], whereas forget sets in TOFU consist of similarly formatted question-answer pairs on fictitious author profiles. Despite this distinction, our method attains best performance on both setups and hence we can expect our methods to be effective in both scenarios with diversely or uniformly distributed forget set distributions.\\n\\n[A] [Gao et al., The Pile: An 800GB Dataset of Diverse Text for Language Modeling. arXiv 2020.](https://arxiv.org/abs/2101.00027)\"}",
"{\"summary\": \"The paper focus on the problem of unstable optimization, catastrophic forgetting, and computational cost from Gradient Ascent for LLM unlearning, and propose two novel techniques, including the Inverted Hinge Loss and Fisher Information weighted initialization of LoRA adapters, for robust and efficient unlearning for LLMs. Experiments on two tasks with different LLMs show that the proposed methods enable faster and more stable LoRA-based LLM unlearning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper makes a good contribution to knowledge unlearning of LLMs through improving tuning stability and efficiency with Inverted Hinge Loss and Fisher-Initialization of Low-rank Adapters, respectively. The proposed method is valuable of managing unnecessary forgetting and unbounded optimization in typical Gradient Ascent strategies.\\n2. The paper is generally well-written, particularly in formula derivation and clear explanation about IHL and FILA solve the weaknesses of GA in Section 3.3 and Section 3.4.\\n3. The experiments and analysis of the paper is comprehensive, with great illustration on performance evaluation using high quality charts.\", \"weaknesses\": \"1. In Introduction Section (Line 51-53), you mention GA and its shortcomings, I think a better way of writing here would be providing a brief overview of 2-3 other key knowledge unlearning approaches beyond GA, and summarize 1-2 common shortcomings across these methods that motivate the proposed approach. GA should be only one of those existing typical methods.\\n\\n2. In Introduction Section (Line 76), you mention the application of LoRA to LLM unlearning remains unexplored, however, there are some existing studies using LoRA for LLM unlearning, including Practical Unlearning for Large Language Models (https://arxiv.org/abs/2407.10223) and Machine Unlearning in Large Language Models (https://arxiv.org/abs/2405.15152v1). It would be better to briefly summarize (in 1-2 sentences each) how LoRA was used for unlearning in these two papers, and then explain how their proposed approach differs or improves upon these methods.\\n\\n3. Some important content is missing. In Introduction Section, a lack of clear summarization of contributions in the paper, making readers difficult to capture the important points of the study. Besides, in Related Work Section, a brief comparison between your proposed method and other relevant studies should be presented to better emphasize the advantages of your work. \\n\\n4. In Section 3.3 (Line 223-227), you should provide a more detailed explanation of how you arrived at these hypotheses. There is still a gap between GA motivation and its weaknesses. To make the illustration more convincible here, a better way would be providing a specific example or mathematical derivation showing how the GA loss function leads to one of the stated problems (e.g., unbounded optimization or inefficient gradient updates).\\n\\n5. In Section 3.4 (Line 265), a basic and brief description of Fisher Information is necessary here for better understanding of the reason you employ it to address the importance quantification you mentioned before.\\n\\n6. 
In Table 1, to make the table more readable, the data of important results should be highlighted via bolding the best result in each row or using color coding to show relative performance between methods, in order to show either your FILA LoRA performs much better than traditional LoRA, or it can approach the performance of full fine-tuning.\\n\\n7. There are some writing flaws:\\n* Some sentences are too long to follow and comprehend, such as Line 89-93.\\n* In Line 419-421, there are two \\\"Second\\\" in these two sentences, making them difficult to be understood.\\n* The capitalization in the text should be more consistent. For instance, you use lowercase \\\"l\\\" in \\\"Inverted Hinge loss\\\" at Line 20 and Line 161, but uppercase \\\"L\\\" in \\\"Inverted Hinge Loss\\\" at Line 82. All uppercase would be better.\", \"questions\": \"1. In Introduction Section (Line 72-74), can you explain more about the reason that low-rankness can be beneficial in stabilizing optimization and preventing catastrophic forgetting? Clearer illustration would be better here.\\n\\n2. In Table 1, why you record results from running different epochs? Does it mean the method reaches the optimal with these epochs?\\n\\n3. In Experiments Section, why different LLMs are used for those two tasks? Have you evaluated more popular and larger LLMs such as Llama3.1-8B? I suggest giving explanation of the strategy and purpose of model selection.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response\", \"comment\": \"I have read the new version of the paper, and I have no further questions. I will improve my score.\"}",
"{\"title\": \"Follow-up Response to Reviewer EhHm (2/2)\", \"comment\": \"---\\n> [W4.1] In Table 1, the results of \\u201cGD + LoRA initialized with FILA\\u201d are also missing. The settings are to demonstrate the effectiveness of using FILA alone.\\n\\n→ As mentioned in our response for [W2], the main role of FILA is to accelerate tuning on $\\\\mathcal{D}_f$ via proper initialization, and whether this speed up brings performance benefits after unlearning hinges upon the loss function being optimized for unlearning. \\n\\nThat being said, we have updated our manuscript to include results from GD+FILA in Table 1. Comparing GD vs. GD+FILA, we can see that the number of epochs needed to successfully unlearn $\\\\mathcal{D}_f$ decreases consistently when incorporating FILA with GD, but the downstream performance worsens in many cases (for example, GPT-Neo-1.3B performs worse with GD+FILA on all aspects than just GD). This demonstrates that accelerating the divergent behavior of the GA loss in GD with FILA can worsen downstream performance. On the other hand, FILA becomes beneficial when paired with a stable and bounded objective such as IHL, which is shown in Table 1 with consistent performance boosts from IHL to IHL+FILA.\\n\\n---\\n> [W4.2] For \\u201cLoRA + FILA only\\u201d, I am referring to the loss function GA.\\n\\n→ To clarify, GD is our primary baseline representing GA as (1) GD is equal to GA plus a regularizer on the retain set for better knowledge retention (i.e. $\\\\mathcal{L}\\\\_{\\\\text{GD}} = \\\\mathcal{L}\\\\_{\\\\text{GA}}(\\\\mathcal{D}\\\\_f) + \\\\mathcal{L}\\\\_{\\\\text{LM}}(\\\\mathcal{D}\\\\_r)$) and (2) our preliminary results on TDEC (Table 1) have shown that GD consistently outperforms GA under full-parameter tuning. Notably, GA leads to drastic loss of generative capabilities for GPT-Neo-125M.\\n\\nFor further information, we have run GA+FILA on the TOFU benchmark, results on which can be found in Figure 7 of the Appendix. Similar to GD vs. GD+FILA, we find that GA+FILA suffers from even faster loss in model utility than GA, again reflecting how the divergent behavior is exacerbated with FILA as discussed above.\\n\\nWe are also conducting GA+FILA experiments on our TDEC benchmark. However, due to the high time cost of unlearning evaluation, these results will likely only be available after the rebuttal period. That said, we anticipate that GA+FILA will show similar underperformance as GD+FILA, and will include these findings in the Appendix. We again ask for the reviewer\\u2019s understanding of these constraints.\"}",
"{\"title\": \"Revised Manuscript Available for Further Review\", \"comment\": [\"We would like to inform the reviewers that we have updated our manuscript for further review. As our discussion phase is closing within a few days, we respectfully ask the reviewers to review our revised manuscript, and let us know in case of any additional questions or concerns.\", \"For your reference, the revised portions within the manuscript can be found in blue-colored text. Below is our summary of revisions:\", \"1. Writing\", \"**Lines 50-75 and 83-87** [sxtU, AAXs]: We have added references to other LLM unlearning methods and brief discussions on them in the Introduction section.\", \"**Lines 102-107** [AAXs]: We have added a summary of the main contributions of our paper.\", \"**Lines 159-161** [AAXs]: We have added a brief comparison between our study and other relevant studies.\", \"**Lines 216-227 and 242-255** [EhHm, sxtU, AAXs]: We have added further details on how the gradients of GA reflects its issues, and how our IHL formulation resolves those issues from a probabilistic perspective.\", \"**Lines 258-269** [T4eT]: We have revised the Motivation paragraph of Section 3.4 to clarify the linkage between IHL and FILA.\", \"**Lines 270-273** [AAXs]: We have added explanations on the mathematical and intuitive definition of Fisher information.\", \"**Lines 420-424** [EhHm, sxtU]: We have clarified the observation that even without FILA, IHL outperforms GD in many cases with both full-parameter and LoRA-based tuning.\", \"**Lines 462-465** [EhHm]: We have added details on how the Model Utility metric is measured in TOFU.\", \"**Writing flaws** [AAXs]: We thank the reviewer again for spotting these errors. We have corrected the errors in the revision. Additionally, overly long or verbose sentences have been revised for clarity and conciseness, particularly in the Introduction and Related Work sections.\", \"2. Figures and Tables\", \"**Figure 1** [sxtU]: We have revised Figure 1 to contain specific illustrations of our IHL and FILA methods in addition to the overall LLM unlearning pipeline.\", \"**Table 1** [AAXs]: We have denoted the best performance as bold for each column, and also included color-coded changes in performance after unlearning.\", \"**Figures 3 and 4** [EhHm]: We have added legends to both figures and made the $\\\\times$ markers denoting cases where unlearning was unsuccessful larger for better visibility.\", \"3. Additional Experimental Results\", \"**IHL with full-finetuning on TDEC (Table 1)** [EhHm]: We have added results from running full-parameter unlearning with IHL on the TDEC dataset. We can see that while GA and GD leads to significant loss in generative performance especially with smaller models (GPT-Neo-125M), IHL exhibits superior stability, minimizing the performance gap with the base model consistently across all GPT-Neo models.\", \"**GD+FILA Results on TOFU (Figure 5)** [EhHm]: For completeness, we have added results using FILA with GD to our TOFU experiments, and made the color-coding consistent with Figures 3 and 4 for better readability. Results indicate that the model utility degrades quickly with GD+FILA, as observed in our TDEC experiments.\", \"**Comparison vs. KL, DPO, and NPO in TOFU (Figure 7 in Appendix)** [sxtU]: We have run additional experiments on TOFU with three existing baselines: KL [A], DPO [B], and NPO [C]. 
We found that our IHL+FILA consistently outperforms all three baselines, as all three methods lead to significant degradation in model utility.\", \"[A] [Maini et al., TOFU: A Task of Fictitious Unlearning for LLMs. arXiv 2024.](https://arxiv.org/abs/2401.06121)\", \"[B] [Rafailov et al., Direct Preference Optimization: Your Language Model is Secretly a Reward Model. NeurIPS 2023](https://arxiv.org/abs/2305.18290)\", \"[C] [Zhang et al., Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. arXiv 2024.](https://arxiv.org/abs/2404.05868)\"]}",
"{\"metareview\": \"The paper addresses the challenge of removing sensitive information from Large Language Models (LLMs) without retraining, proposing two techniques: Inverted Hinge Loss (IHL) and Fisher-weighted Initialization of Low-rank Adapters (FILA). IHL aims to stabilize unlearning by focusing on the next most likely token, while FILA uses Fisher information to initialize LoRA adapters, selectively targeting parameters associated with unwanted information to optimize unlearning. The methods were tested on GPT-Neo, Phi-1.5B, and Llama2-7B models, showing efficient unlearning with minimal performance loss. All reviewers agree on the importance of the work and mention that the provided methodology is clearly justified theoretically and empirically. Moreover, the authors have put the effort to modify the manuscript based on the provided reviews and address all their comments. There are suggestions for improvement including clearer explanations (i.e., connection between the two methods are unclear), more recent model evaluations, and better scaling analysis.\", \"additional_comments_on_reviewer_discussion\": \"The original scores were 5 6 5 but after the rebuttal the authors could improve the writing and add extra experimental results to convince the reviewers to increase the scores to 6 6 6.\"}"
]
} |
1Euu8FPr3d | Unsupervised Multi-Agent Diversity With Wasserstein Distance | [
"Tianxu Li",
"Kun Zhu"
] | In cooperative Multi-Agent Reinforcement Learning (MARL), agents sharing policy network parameters are observed to learn similar behaviors, which impedes efficient exploration and easily results in the local optimum of cooperative policies. In order to encourage multi-agent diversity, many recent efforts have contributed to distinguishing different trajectories by maximizing the mutual information objective, given agent identities. Despite their successes, these mutual information-based methods do not necessarily promote exploration. To encourage multi-agent diversity and sufficient exploration, we propose a novel Wasserstein Multi-Agent Diversity (WMAD) exploration method that maximizes the Wasserstein distance between the trajectory distributions of different agents in a latent representation space. Since the Wasserstein distance is defined over two distributions, we further extend it to learn diverse policies for multiple agents. We empirically evaluate our method in various challenging multi-agent tasks and demonstrate its superior performance and sufficient exploration compared to existing state-of-the-art methods. | [
"Multi-Agent Reinforcement Learning",
"Multi-Agent diversity",
"Cooperation",
"Wasserstein Distance"
] | Reject | https://openreview.net/pdf?id=1Euu8FPr3d | https://openreview.net/forum?id=1Euu8FPr3d | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wWDAq7tBXq",
"tQOeLsrxVb",
"mrNe2attzZ",
"mp6y8zPhV9",
"m3hEPfySqM",
"klaDNEBZcq",
"cJyW37BvJx",
"ZtRVgXaQf3",
"Tkx5x1vXUC",
"TJuWNKLY7X",
"QhGWVUcHQf",
"Q9f98S5LFC",
"OmbpAOsbM5",
"NeD7s2LlZr",
"EGg8yqJBJl",
"CfnEIsXCob",
"Bmr0kpRVFN",
"3XM6JFhLbf"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1733035448868,
1729424083522,
1732527301793,
1732631427528,
1730685536641,
1734911205245,
1732410405536,
1732542594729,
1733035559907,
1732439520169,
1730299970179,
1737524060529,
1732410514252,
1732410276877,
1732675327125,
1732679207510,
1732410234446,
1729944649993
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_VJC3"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_C1n6"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_z99x"
],
[
"ICLR.cc/2025/Conference/Submission10544/Area_Chair_hDCh"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_z99x"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_e3vP"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_C1n6"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10544/Reviewer_e3vP"
]
],
"structured_content_str": [
"{\"comment\": \"We hope the responses above have addressed your concerns. We would appreciate receiving your feedback.\"}",
"{\"summary\": \"This paper proposes Wasserstein Multi-Agent Diversity (WMAD), a new method for promoting exploration in Multi-Agent Reinforcement Learning (MARL). Unlike mutual information-based approaches, WMAD maximizes the Wasserstein distance between agents' trajectory distributions to encourage diverse behaviors. The method leverages Contrastive Predictive Coding (CPC) to learn trajectory representations and introduces a nearest neighbor intrinsic reward based on the Wasserstein distance. WMAD achieves more diverse policies and better exploration, outperforming state-of-the-art methods in complex multi-agent tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a new approach by using the Wasserstein distance to promote agent diversity, addressing the limitations of mutual information-based methods in encouraging effective exploration.\", \"Using contrastive predictive coding for learning distinguishable trajectory representations enhances the ability to measure differences between agents\\u2019 behaviors.\", \"The method is evaluated across multiple challenging multi-agent environments (like Pac-Men, SMAC, and SMACv2), demonstrating consistent outperformance over baseline methods. The proposed approach is integrated with MARL algorithms like QMIX, showing its practical applicability and potential to improve real-world cooperative learning tasks.\", \"The paper is organized clearly and it is easy to follow its core idea.\"], \"weaknesses\": [\"The Wasserstein distance relies on an appropriate cost function to measure trajectory differences, and the paper uses a simple Euclidean distance without exploring task-specific alternatives, which may limit the method\\u2019s adaptability.\", \"Although the paper employs kernel-based techniques to reduce costs, computing the Wasserstein distance for every pair of agents in large-scale multi-agent systems can still be computationally intensive, making the system not scalable.\", \"The paper does not thoroughly explore the sensitivity of the method to key parameters, such as the choice of kernel or the weighting of intrinsic rewards, which could affect generalizability.\"], \"questions\": [\"Have you explored alternative cost functions tailored to specific tasks, and if so, how did they impact the results?\", \"Although you adopted a kernel-based method to reduce computational costs, what challenges did you face in scaling the method to larger multi-agent systems? Will this limit the proposed method's scalability?\", \"How sensitive is your method to the selection of hyperparameters, such as the kernel width or the coefficient for intrinsic rewards? Did you conduct any sensitivity analysis to understand their impact?\", \"Since the effectiveness of your method heavily relies on CPC for trajectory representation, how robust is the learned representation to noise or perturbations in agent observations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your response. Trajectory representation learning in our method is necessary. It enlarges the distance between trajectories in a latent space, enabling the proper functioning of the Wasserstein distance. We propose a novel next-step prediction method based on CPC to learn distinguishable trajectory representations. As a result, by using our method, we do not need to use per-agent policy networks to introduce heterogeneous behaviors, unlike previous works such as CDS [1] and DiCo [2], which significantly reduces the number of parameters. This idea is novel and has never been proposed in previous works.\\n\\n[1] Li, Chenghao, et al. \\\"Celebrating diversity in shared multi-agent reinforcement learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 3991-4002.\\n \\n[2] Bettini M, Kortvelesy, et al. \\u201cControlling Behavioral Diversity in Multi-Agent Reinforcement Learning\\u201d International Conference on Machine Learning (2024)\"}",
"{\"comment\": \"Considering the efforts the authors have made in their response, I have decided to increase my score. However, through the author's response, the novelty is still somewhat incremental from my perspective, like other reviewers, especially with the comparison with other MARL algorithms including Wasserstein distance. Besides, I am uncertain about the effectiveness and stability of the performance gains achieved by the method proposed by the authors.\"}",
"{\"summary\": \"This paper proposes a novel approach to multi-agent policy diversity within the MARL domain. Firstly, the paper provides a detailed analysis of the shortcomings of current diversity methods based on mutual information. Subsequently, it leverages a CPC-based next-step prediction method to facilitate the learning of distinguishable representations of agent trajectories. Furthermore, it introduces a method for rapidly calculating the Wasserstein distance in multi-agent systems, which is integrated into practical MARL algorithms in the form of intrinsic rewards. Finally, the effectiveness of the proposed method is validated on the Pac-Men, SMAC/SMACv2 environments.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly articulated and well-structured, with the discussion on MI-based methods being particularly enlightening.\\n2. The discussion in the experimental section is comprehensive, with a thorough design of ablation studies.\", \"weaknesses\": \"Although the paper is well-written, I have major concerns regarding the novelty of the paper.\\n1. The first is the introduction of Wasserstein Distance (WD) to quantify the policy diversity (as represented by trajectories) among agents, where there has already been related work in the MARL domain, which may not represent a significant innovation. For example, work [1] introduces the concept of system neural Diversity based on WD, and work [2] proposes a policy distance concept also based on WD by learning representations of policy distributions. \\n2. The second concern is about translating the diversity's WD into intrinsic rewards to encourage diversity. In fact, methods purely encouraging diversity are not limited to intrinsic rewards or objective functions but also include controlling the network structure. Work [3] has even gone beyond merely encouraging diversity to being able to control the diversity of a multi-agent system to a specific value. Therefore, this work might not be novel enough to match the ICLR community. \\n3. Correspondingly, there are concerns regarding the selection of baselines. Since this is a method encouraging multi-agent diversity, why has it only been compared with MI-based methods? Baselines should include MARL diversity-related but MI-unrelated methods. For example, RODE [4], ADMN[5], and the previously mentioned methods?\\n\\n*I hope the authors can understand my concerns and address them together with the following questions.*\\n\\n\\n[1] Bettini M, Shankar A, et al. \\u201cSystem neural diversity: measuring behavioral heterogeneity in multi-agent learning\\u201d[J]. arXiv preprint arXiv:2305.02128, 2023.\\n\\n[2] Hu T, Pu Z, Ai X, et al. \\u201cMeasuring Policy Distance for Multi-Agent Reinforcement Learning\\u201d International Conference on Autonomous Agents and Multiagent Systems (2024)\\n\\n[3] Bettini M, Kortvelesy, et al. \\u201cControlling Behavioral Diversity in Multi-Agent Reinforcement Learning\\u201d International Conference on Machine\\nLearning (2024) \\n\\n[4] T. Wang, T. Gupta, A. Mahajan, et al. \\u201cRODE: Learning Roles to Decompose Multi-Agent Tasks\\u201d International Conference on Learning Representations \\uff082021\\uff09\\n\\n[5]Yu Y, Yin Q, Zhang J, et al. 
\\u201cADMN: Agent-Driven Modular Network for Dynamic Parameter Sharing in Cooperative Multi-Agent Reinforcement Learning\\u201d\\nInternational Joint Conference on Artificial Intelligence \\uff082024\\uff09\", \"questions\": \"Beyond the major concerns I have listed, there are the following questions:\\n1. Can this method be applied to agents in continuous action spaces, and to multi-agents with different action spaces? \\n2. Regarding the discussion in lines 171-173 of the paper, can the authors provide an example to illustrate this point, and why wouldn't the WD directly become zero?\\n3. Compared to vanilla methods like QMIX and QTRAN, WMAD will undoubtedly introduce additional computational overhead. If WMAD appears to require fewer timesteps to achieve comparable performance levels, but demands more CPU/GPU computation time \\uff08or real time\\uff09, could this impact its practical use? Have there been any experiments conducted to assess the extent of this additional computational load?\\n\\n4. WMAD chooses the Euclidean distance as the cost function to compute the Wasserstein distance. I am curious about the results if the Euclidean distance were used directly as the intrinsic reward.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper proposes Wasserstein Multi-Agent Diversity (WMAD), a new method for promoting exploration in Multi-Agent Reinforcement Learning (MARL). Unlike mutual information-based approaches, WMAD maximizes the Wasserstein distance between agents' trajectory distributions to encourage diverse behaviors. The method leverages Contrastive Predictive Coding (CPC) to learn trajectory representations and introduces a nearest neighbor intrinsic reward based on the Wasserstein distance. WMAD achieves more diverse policies and better exploration, outperforming state-of-the-art methods in complex multi-agent tasks.\\n\\nThe main concern shared among reviewers is the novelty, as WMAD just replaces the existing diversity-promoting methods with Wasserstein distance. The AC agrees and thus recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"The main concern shared among reviewers is the novelty, as WMAD just replaces the existing diversity-promoting methods with Wasserstein distance. This concern was not fully addressed in the rebuttal.\"}",
"{\"comment\": \"Thank you for your careful review.\", \"weakness_1\": \"The ... distance.\\n\\nPromoting diversity for enhanced exploration is an emergent research direction. Our method solves the limitation of the mutual information-based method, which lacks diversity metric, and encourages sufficient exploration. Moreover, our contributions are different from prior works using the Wasserstein distance. Prior works overlook the problem of applying the Wasserstein distance in multi-agent settings, where the parameter-sharing policy network can lead to homogeneous policies, thereby undermining the proper functioning of the Wasserstein distance. The Wasserstein distance between any two agents' trajectory distributions approaches zero, i.e., $W(X,Y)\\\\rightarrow 0$, where $X$ and $Y$ respectively represent the trajectory distributions of two agents. The ablation results show that simply using the Wasserstein distance without representation learning leads to a significant performance drop. To solve the problem, the main contribution of our method lies in representation learning with CPC as we discussed in the last paragraph of Section 1 in our paper. We consider a latent representation space to make the Wasserstein distance meaningful. We construct this representation space using the Contrastive Predictive Coding (CPC) method to learn distinguishable trajectory representations.\", \"weakness_2\": \"According ... scenarios.\\n\\nWe analyze the performance of our method in scenarios requiring homogeneous behaviors in the last paragraph of Section 5.2: \\\"Moreover, it is notable that our method also achieves satisfactory performance in the easy 3s5z scenario where agents sometimes need to behave in the same way to master the trick of 'focus fire', demonstrating that our method would not prevent the homogeneous behaviors that can lead to more environmental rewards. More experimental results related to such homogeneous behaviors can be found in Appendix G.2. These results reveal that our method efficiently balances exploration and exploitation, resulting in the learning of optimal cooperative policies.\\\"\", \"weakness_3\": \"The ... respectively.\\n\\nIn Appendix I, we detail the hyperparameters and network structures used in our experiments. To ensure a fair comparison, we use consistent hyperparameters and the same network structure for all baselines. We implement our method based on PyMARL2. To compare the performance of our method under different settings, we test our method under the hyperparameter settings in PyMARL in three super hard scenarios of SMAC. The results are shown below:\\n\\n\\n|\\u00a0 Method \\u00a0 | 6h\\\\_vs\\\\_8z\\u00a0 |corridor\\u00a0 |3s5z\\\\_vs\\\\_3s6z\\u00a0 |\\n|---------------------------------------------|-------------------|------------------|------------------|\\n|WMAD (PyMARL) | 0.79 $\\\\pm$ 0.07 |0.93 $\\\\pm$ 0.05 |0.83 $\\\\pm$ 0.08|\\n|WMAD (PyMARL2) | 0.85 $\\\\pm$ 0.03 |0.90 $\\\\pm$ 0.03 |0.87 $\\\\pm$ 0.04 |\\n\\nWe note that under the hyperparameter settings in PyMARL, our method achieves similar performance to that implemented with PyMARL2. This phenomenon demonstrates that our method is not overly sensitive to the hyperparameter values because of throughout exploration.\", \"q1\": \"I hope ... training costs.\\n\\nOur WMAD and the baseline methods including MAVEN, QTRAN, EOI, SCDS, PMIC, and FoX are based on the framework of QMIX. Additionally, MAVEN introduces a GRU unit and a discriminator that consists of a two-layer MLP. 
EOI learns an additional discriminator that consists of a two-layer MLP. Similar to EOI, SCDS also needs to learn a trajectory discriminator. PMIC introduces a Dual Mutual Information Estimator that includes a state encoder, an action encoder, and an action prediction network. LIPO is based on MAPPO and additionally introduces a variational trajectory discriminator for each agent to maximize the mutual information objective. Our method learns a trajectory encoder that consists of a two-layer MLP with a hidden size of 64 for the encoder and a GRU unit for the autoregressive model. We also adopt a dual vector with a dimension $m$ of 64 to parameterize the dual function. Note that our method has a comparable number of parameters to other methods but achieves significant performance improvement over baseline methods. Since all methods are evaluated under the same computational platform, we compare the training costs of various methods based on the training time (5 million steps in the corridor scenario of SMAC) that are shown below:\\n\\n|\\u00a0 Method \\u00a0 | Training time\\u00a0 |\\n|---------------------------------------------|-------------------|\\n|QMIX | 7h 17m 30s |\\n|MAVEN | 7h 23m 16s |\\n|QTRAN | 8h 10m 28s |\\n|EOI | 7h 46m 35s |\\n|SCDS | 8h 23m 14s |\\n|PMIC | 8h 59m 52s |\\n|FOX | 7h 9m 52s |\\n|LIPO\\u00a0 | 9h 10m 9s |\\n|WMAD (Ours)\\u00a0 | 7h 35m 47s |\\u00a0\\n\\nWe note that our method requires relatively less training time compared to the baseline methods.\\n\\n\\nWe hope that the responses provided above have addressed your concerns. We would be grateful for your feedback.\"}",
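Given the architecture described above (a two-layer MLP encoder with hidden size 64 and a GRU autoregressive model), the CPC-based next-step prediction can be sketched as follows. The linear prediction head and the choice of negatives (next-step codes of the other trajectories in the batch) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryCPC(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))      # two-layer MLP
        self.gru = nn.GRU(hidden, hidden, batch_first=True)          # autoregressive model
        self.predict = nn.Linear(hidden, hidden)                     # c_t -> predicted z_{t+1}

    def forward(self, obs: torch.Tensor):
        """obs: (B, T, obs_dim), one row per agent trajectory.
        Returns (InfoNCE loss, per-step contexts c of shape (B, T, hidden))."""
        z = self.encoder(obs)                    # per-step codes
        c, _ = self.gru(z)                       # contexts summarizing each prefix
        pred = self.predict(c[:, :-1])           # predict the next-step code from c_t
        target = z[:, 1:]

        # InfoNCE: for position (b, t) the positive is trajectory b's own next-step
        # code; negatives are the next-step codes of the other trajectories at t.
        logits = torch.einsum('bth,kth->btk', pred, target)          # (B, T-1, B)
        labels = torch.arange(obs.size(0), device=obs.device)
        labels = labels.view(-1, 1).expand(-1, logits.size(1))       # (B, T-1)
        loss = F.cross_entropy(logits.reshape(-1, obs.size(0)),
                               labels.reshape(-1))
        return loss, c
```

The contexts `c` produced here serve as the latent trajectory representations between which the Wasserstein distance is subsequently measured.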
"{\"comment\": \"Thank you for your response. Your reply has addressed some of my concerns. I particularly appreciate the addition of comparisons with methods such as Dico, RODE, and ADMN. However, I am not entirely satisfied with the rebuttal regarding W1. Additionally, could the authors provide some experimental details comparing with DiCo? It would be even better if the code (including the SMAC environment and the algorithm itself) could be shared via an anonymized link. Overall, I have decided to increase my score to 5\\uff08The highest I can give currently).\"}",
"{\"comment\": \"We greatly appreciate your suggestions and look forward to your feedback.\"}",
"{\"comment\": \"I've read other reviewers' comments and the rebuttals, and I really appreciate the authors for their efforts. The experiment results of WMAD are convincing, and it seems the reviewers agree that the main issue of WMAD is about the novelty. Replacing the MI-based metric with another metric is not a significant innovation, and the application of CPC in reinforcement learning has already been discussed in its original paper. WMAD combines these techniques together like building blocks but lacks new theoretical findings. Therefore, I have to keep my score.\"}",
"{\"summary\": \"In order to promote exploration in multi-agent reinforcement learning, the authors propose a method WMAD to maximize the difference in agents\\u2019 trajectories. The difference in trajectories is represented by Wasserstein distance, which is calculated with latent variables. Extensive experiments are conducted to show the superiority of WMAD in tasks including Pac-Men and SMAC.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors propose SMAC to promote exploration by using Wasserstein distance to evaluate the difference in trajectories of different agents, which is more reasonable than mutual information.\\n2. Experimental results show that the proposed algorithm WMAD is much better than baseline algorithms.\", \"weaknesses\": \"1. Although the use of Wasserstein distance is better than mutual information, it seems that the idea of using Wasserstein distance to enhance the difference in trajectories of agents has been proposed, such as \\u201cControlling Behavioral Diversity in Multi-Agent Reinforcement Learning\\u201d.\\n2. The authors claim that the proposed algorithm WMAD achieve SOTA with better exploration, while the baseline algorithms in experiments are not specifically designed for exploration. Baselines are fundamental MARL algorithms and mutual information-based exploration algorithms. Other kinds of exploration methods are missing, such as \\u201cEpisodic multi-agent reinforcement learning with curiosity-driven exploration\\u201d.\\n3. It seems that the results of baselines are much worse than those in original papers, such as MAVEN in 6h_vs_8z (super hard) and corridor (super hard).\", \"questions\": \"See the weaknesses. Look forward to more explanation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thanks for your feedback. We respond to your concerns below:\", \"weakness1\": \"The ... adaptability.\\n\\nIn our paper, we use the Wasserstein distance to encourage sufficient exploration and simply adopt the Euclidean distance as the cost function as in many prior works. We may also use cosine similarity as the cost function, which measures the direction differences between data points. We test the cosine similarity in Pac-Men, where agents require to move in different directions.\\u00a0 The results are shown below:\\n\\n|\\u00a0 Method \\u00a0 | Pac-Men\\u00a0 |\\n|---------------------------------------------|-------------------|\\n|WMAD (Cosine Similarity) | 94 $\\\\pm$ 0.05 |\\n|WMAD (Euclidean distance) | 87 $\\\\pm$ 0.03 |\\n\\nWe note that the Wasserstein distance based on the cosine similarity achieves higher rewards. In our work, we do not specifically discuss different cost functions and use the default Euclidean distance because we want to be consistent with prior works using the Wasserstein distance to ensure a fair comparison. We have added this analysis to our paper.\", \"weakness_2\": \"Although ... not scalable.\\n\\nOur method is scalable, as demonstrated in the last paragraph of Appendix G.2. We evaluate our method in scenarios of SMACv2 with an increasing number of agents. Our method achieves satisfactory performance and scales well across all scenarios. Moreover, our method would not cost high computational resources. We provide comparisons of training time (5 million steps in the corridor scenario of SMAC) of our method against baselines in the table below:\\n\\n|\\u00a0 Method \\u00a0 | Training time\\u00a0 |\\n|---------------------------------------------|-------------------|\\n|QMIX | 7h 17m 30s |\\n|MAVEN | 7h 23m 16s |\\n|QTRAN | 8h 10m 28s |\\n|EOI | 7h 46m 35s |\\n|SCDS | 8h 23m 14s |\\n|PMIC | 8h 59m 52s |\\n|FOX | 7h 9m 52s |\\n|LIPO\\u00a0 | 9h 10m 9s |\\n|WMAD (Ours)\\u00a0 | 7h 35m 47s |\\u00a0\\n\\nOur method consumes relatively less training time compared to the baseline methods.\", \"weakness_3\": \"The ... generalizability.\\n\\nWe use the Gaussian kernel by default in our paper. We may also use a linear kernel to parameterize dual functions. To evaluate the effectiveness of using the linear kernel for dual functions, we design a linear kernel variant and test it in the super hard scenarios of SMAC. The results are shown below:\\n\\n|\\u00a0 Method \\u00a0 | 6h\\\\_vs\\\\_8z\\u00a0 |corridor\\u00a0 |3s5z\\\\_vs\\\\_3s6z\\u00a0 |\\n|---------------------------------------------|-------------------|------------------|------------------|\\n|WMAD (Linear Kernel) | 0.57$\\\\pm$ 0.07 |0.39 $\\\\pm$ 0.05|0.32 $\\\\pm$ 0.03|\\n|WMAD (Ours) | 0.85 $\\\\pm$ 0.03 |0.90 $\\\\pm$ 0.03 |0.87 $\\\\pm$ 0.04 |\\n\\nWe note that using the linear kernel to parameterize dual functions leads to a significant performance drop. We suspect this is because the dual function may not be a linear function. Using the linear kernel constraints the representational ability of the dual function.\\n\\nThe values for the weight of the intrinsic reward $\\\\alpha$ in different scenarios are listed in Table 5 in our paper. To investigate the effect of different weights of the intrinsic reward, we test different weight values in the easy scenario 3s5z and the super hard scenario corridor. 
The results are shown in the table below:\\n\\n| Method\\u00a0 \\u00a0 | 3s5z ($\\\\alpha$=0.02) \\u00a0 \\u00a0 | 3s5z ($\\\\alpha$=0.05) \\u00a0 \\u00a0 | 3s5z ($\\\\alpha$=0.1) \\u00a0 \\u00a0 | corridor ($\\\\alpha$=0.02)\\u00a0 | corridor ($\\\\alpha$=0.05)\\u00a0 | corridor ($\\\\alpha$=0.1) \\u00a0 |\\n|--------------------|------------------|------------------|-----------------|------------------|------------------|------------------|\\n| WMAD \\u00a0 | 0.89 \\u00b1 0.03\\u00a0 \\u00a0 \\u00a0 | 0.91 \\u00b1 0.02\\u00a0 \\u00a0 \\u00a0 | 0.93 \\u00b1 0.03 \\u00a0 \\u00a0 | 0.82 \\u00b1 0.07\\u00a0 \\u00a0 \\u00a0 | 0.85 \\u00b1 0.04\\u00a0 \\u00a0 \\u00a0 | 0.81 \\u00b1 0.05\\u00a0 \\u00a0 \\u00a0 |\\n\\nThe results demonstrate that our method is not very sensitive to the values of the weight. Sub-optimal weights do not result in a significant performance drop even in the super hard scenario. We have added these discussions in our paper.\", \"q1\": \"Have ... the results?\\n\\nSee Weakness1;\", \"q2\": \"Although ... scalability?\\n\\nSee Weakness2;\", \"q3\": \"How ... impact?\\n\\nSee Weakness3;\", \"q4\": \"Since ... observations?\\n\\nWe may add Gaussian noise to the observations and test the robustness of our representation learning method. We evaluate such a method in three super hard scenarios of SMAC. The results are shown below:\\n\\n|\\u00a0 Method \\u00a0 | 6h\\\\_vs\\\\_8z\\u00a0 |corridor\\u00a0 |3s5z\\\\_vs\\\\_3s6z\\u00a0 |\\n|---------------------------------------------|-------------------|------------------|------------------|\\n|WMAD (Gaussian noise) | 0.83 $\\\\pm$ 0.05 |0.86 $\\\\pm$ 0.08 |0.81 $\\\\pm$ 0.09|\\n|WMAD (Ours) | 0.85 $\\\\pm$ 0.03 |0.90 $\\\\pm$ 0.03 |0.87 $\\\\pm$ 0.04 |\\n\\nWe note that our method remains robust to noise in the observations. Our proposed representation learning method is based on the next-step prediction, which is more robust to direct representation learning from raw observations.\"}",
"{\"comment\": \"Thank you for the in-depth review. Here are the responses to your questions and concerns:\", \"q1\": \"Although ... Learning\\u201d.\\n\\nOur contributions are different from prior works that use the Wasserstein distance. Prior works overlooked the problem of applying the Wasserstein distance to multi-agent settings, where the parameter-sharing policy network may lead to homogeneous policies. Such homogeneous policies disable the proper functioning of the Wasserstein distance. The Wasserstein distance between any two agents' trajectory distributions approaches zero, i.e., $W(X,Y)\\\\rightarrow 0$, where $X$ and $Y$ respectively represent the trajectory distributions of two agents. The ablation results show that simply using the Wasserstein distance without representation learning leads to a significant performance decline. To solve the problem, the main contribution of our method lies in representation learning with CPC as we discussed in the last paragraph of Section 1 in our paper. We consider a latent representation space to make the Wasserstein distance meaningful. We construct this representation space using the Contrastive Predictive Coding (CPC) method to learn distinguishable trajectory representations.\\u00a0\\n\\nMoreover, prior works do not consider the high computational cost caused by calculating the Wasserstein distance. We propose a novel kernel method to calculate the Wasserstein distance. Our method only needs to optimize two dual vectors, which significantly reduces the computational cost.\", \"q2\": \"The ... exploration\\u201d.\\n\\nWe compare our method with EMC proposed in \\\"Episodic multi-agent reinforcement learning with curiosity-driven exploration\\\" in three super hard scenarios of SMAC. The results are shown below:\\n\\n|\\u00a0 Method \\u00a0 | 6h\\\\_vs\\\\_8z\\u00a0 |corridor\\u00a0 |3s5z\\\\_vs\\\\_3s6z\\u00a0 |\\n|---------------------------------------------|-------------------|------------------|------------------|\\n|EMC | 0.37 $\\\\pm$ 0.05 |0.76 $\\\\pm$ 0.08 |0.73 $\\\\pm$ 0.04|\\n|WMAD (Ours) | 0.85 $\\\\pm$ 0.03 |0.90 $\\\\pm$ 0.03 |0.87 $\\\\pm$ 0.04 |\\n\\nOur method outperforms EMC, demonstrating the effectiveness of our Wasserstein distance-based exploration.\", \"q3\": \"It seems ... hard).\", \"the_performance_differences_of_baseline_methods_stem_from_several_reasons\": \"first, we use consistent hyperparameters and the same network structures for all baselines to ensure a fair comparison. The hyperparameters and network structures may differ from those used in the original papers; second, performance comparisons across different SMAC versions are not applicable. The settings of scenarios across different versions can be different.\\n\\n\\nWe hope the responses provided above have resolved your concerns. Your feedback would be greatly appreciated.\"}",
"{\"comment\": \"Our method adopts a trajectory representation learning technique using novel next-step prediction to solve an emergency problem, where the homogeneous trajectories may not enable the proper functioning of the Wasserstein distance. Instead, we enlarge the Wasserstein distance between trajectories in a latent space. As a result, by using our method, we do not need to use per-agent policy networks to introduce heterogeneous behaviors, unlike previous works such as CDS [1] and DiCo [2], which significantly reduce the number of parameters. This idea is novel and has never been proposed in previous works.\\n\\nOur statically reliable results, which are average returns of all algorithms in Pac-Men, SMAC, and SMACv2 along with the standard deviation over five random seeds, demonstrate the effectiveness and stability of the performance gains achieved by our method. \\n\\n[1] Li, Chenghao, et al. \\\"Celebrating diversity in shared multi-agent reinforcement learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 3991-4002.\\n\\n[2] Bettini M, Kortvelesy, et al. \\u201cControlling Behavioral Diversity in Multi-Agent Reinforcement Learning\\u201d International Conference on Machine Learning (2024)\"}",
"{\"comment\": \"Thank you for your response.\\n\\nWe first clarify the differences between our method and previous works such as [1] and [2] to demonstrate our contributions. \\n\\nThe authors in [1] propose controlling the heterogeneous policies via rescaling SND, which is measured by the Wasserstein distance. To produce heterogeneous policies, they introduce per-agent networks to learn diverse behaviors, which may lead to high computational cost and poor scalability. However, in our paper, we do not need additional per-agent networks, through contrastive representation learning, our method learns distinguishable trajectory representations that enable the Wasserstein distance to efficiently encourage multi-agent diversity. Moreover, we encourage extensive exploration. As we discussed in the response above, controling the diversity may not promote extensive exploration.\\n\\n\\nThe authors in [2] propose using the Wasserstein distance to measure the policy differences. Different from our contrastive representation learning, they use an encoder-decoder structure to learn latent representations of policies to standardize the action distributions of different agents. They ignore the problem that the Wasserstein distance with homogenous policies may not work properly. \\n\\nMoreover, as we discussed above, both works do not consider the high computational cost of the Wasserstein distance. In our method, we propose a novel kernel method to calculate the Wasserstein distance. Our method only needs to optimize two dual vectors, which significantly reduces the computational cost. \\n\\n\\nThe experiments on performance comparison between our method and DiCo follow the training details provided in Appendix H. We are happy to provide the code. However, we are not sure whether providing a link to our code violates the rule of Double-blind review. We need further confirmation from the Area Chair. \\n\\n\\n\\n\\n\\n[1] Bettini M, Shankar A, et al. \\u201cSystem neural diversity: measuring behavioral heterogeneity in multi-agent learning\\u201d[J]. arXiv preprint arXiv:2305.02128, 2023.\\n\\n[2] Hu T, Pu Z, Ai X, et al. \\u201cMeasuring Policy Distance for Multi-Agent Reinforcement Learning\\u201d International Conference on Autonomous Agents and Multiagent Systems (2024)\"}",
"{\"comment\": \"We sincerely appreciate your thorough review and valuable feedback on our manuscript. We answer your questions below:\", \"weakness_1\": \"The ... policy distributions.\\n\\nOur contributions are different from prior works that use the Wasserstein distance. Previous works overlook the problem of applying the Wasserstein distance in multi-agent settings, where the parameter-sharing policy network may result in homogeneous policies, hindering the proper functioning of the Wasserstein distance. The Wasserstein distance between any two agents' trajectory distributions approaches zero. The ablation results show that simply using the Wasserstein distance without representation learning leads to a significant performance decline. To solve the problem, the main contribution of our method lies in representation learning with CPC as we discussed in the last paragraph of Section 1 in our paper. We consider a latent representation space to make the Wasserstein distance meaningful.\\n\\nMoreover, prior works do not consider the high computational cost caused by calculating the Wasserstein distance. This is very important in multi-agent settings. High computational cost may lead to poor scalability. We propose a novel kernel method to calculate the Wasserstein distance. Our method only needs to optimize two dual vectors, which significantly reduces the computational cost.\", \"weakness_2\": \"The second ... ICLR community.\\n\\nWe think extensive exploration is better than limited exploration. Control the diversity may lead to limited exploration and does not necessarily lead to better performance. Moreover, our method can achieve satisfactory performance in scenarios requiring homogenous behaviors, as demonstrated by Appendix G.2, indicating that our method efficiently balances exploration and exploitation.\", \"weakness3\": \"Correspondingly, ... methods?\\n\\nWe compare our method with DiCo, RODE, and ADMN in three super hard scenarios of SMAC. For a fair comparison, we implement baseline methods with consistent hyperparameters and the same network structure. \\n\\n|\\u00a0 Method \\u00a0 | 6h\\\\_vs\\\\_8z\\u00a0 |corridor\\u00a0 |3s5z\\\\_vs\\\\_3s6z\\u00a0 |\\n|---------------------------------------------|-------------------|------------------|------------------|\\n| DiCo | 0.72 $\\\\pm$ 0.04 |0.81 $\\\\pm$ 0.03 |0.65 $\\\\pm$ 0.09 |\\n|RODE | 0.59 $\\\\pm$ 0.08 |0.68 $\\\\pm$ 0.08 |0.47 $\\\\pm$ 0.11|\\n|ADMN | 0.63 $\\\\pm$ 0.05 |0.74 $\\\\pm$ 0.06 |0.72 $\\\\pm$ 0.08|\\n|WMAD (Ours) | 0.85 $\\\\pm$ 0.03 |0.90 $\\\\pm$ 0.03 |0.87 $\\\\pm$ 0.04 |\\n\\nThe experimental results demonstrate the outperformance of our method compared to baseline methods.\", \"q1\": \"Can ... spaces?\\n\\nOur method can be used in multi-agent environments with continuous action spaces. We test our method in Cooperative Navigation, a multi-agent task with continuous action space, where agents learn to cooperatively cover all the landmarks while avoiding collisions.\\n\\n|\\u00a0 Method \\u00a0 | Average distance\\u00a0 |collisions |\\n|---------------------------------------------|-------------------|------------------|\\n|QMIX | 3.21 $\\\\pm$ 0.15 |1.39 $\\\\pm$ 0.27 |\\n|WMAD (Ours) | 2.17 $\\\\pm$ 0.11 |0.85 $\\\\pm$ 0.18 |\\n\\nCompared to QMIX, our method leads to smaller average distances and fewer collisions. \\n\\nWe evaluated our method in SMACv2 in our paper. SMACv2 is a benchmark with stochastic scenarios where agents have different action spaces.\", \"q2\": \"Regarding ... 
zero?\\n\\nWe first consider an ideal condition. Due to policy network parameter sharing, the action distributions output by the policy network follow the same distribution $p$. As a result, $W(p, p) =0$. However, in practice, during the exploration phase, some techniques such as $\\\\epsilon$-greedy have been used to improve the uncertainty in action selections. Thus, the distributions of different agents may not be identical. As a result, the Wasserstein distance approaches zero.\", \"q3\": \"Compared ... load?\\n\\nTo reduce the computational cost, our method uses a kernel method, which only needs to learn two dual vectors. We compare the training time (5 million steps) of QMIX, WMAD w/ kernel method, and WMAD w/ two-layer neural network in the corridor scenario of SMAC under the same computation platform. \\n\\n\\n|\\u00a0 Method \\u00a0 | Training time |\\n|---------------------------------------------|-------------------|\\n|QMIX | 7h 17m 30s |\\n|WMAD w/ kernel method | 7h 35m 47s |\\n|WMAD w/ two-layer neural network | 9h 29m 16s\\u00a0 |\\n\\nOur method does not cost much additional training time. However, using a two-layer neural network to parameterize the dual function leads to high computational cost.\", \"q4\": \"WMAD ... reward.\\n\\nThe Euclidean distance directly measures the distance between data points, which may lead to high variance. We design a variant using the Euclidean distance as intrinsic rewards and test it in the super hard 3s5z\\\\_vs\\\\_3s6z scenario. This variant achieves an even lower win rate of 0.17 $\\\\pm$ 0.09 compared to QMIX, which achieves a win rate of 0.36 $\\\\pm$ 0.12.\\n\\nWe hope to hear from you soon and thank you again for your review.\"}",
"{\"summary\": \"This paper focuses on the issue of diversity in cooperative multi-agent reinforcement learning (MARL). As parameter sharing in MARL often leads to homogeneous behaviors and limited exploration, some previous methods promote identity-aware multi-agent diversity by mutual information (MI). The authors point out the drawbacks of MI and replace it with Wasserstein Distance. The Wasserstein Multi-Agent Diversity (WMAD) uses the Wasserstein distance between the trajectory distributions of different agents as an intrinsic reward to facilitate exploration. The authors conducted experiments on Pac-Men, SMAC and SMACv2 to test the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors describe the framework and implementation of WMAD in detail, which makes the method easy to understand.\\n2. The motivation of this paper is very clear: First, promote multi-agent diversity for exploration; Second, improve previous mutual-information-based approaches.\\n3. The visualization of the visited area strongly demonstrates the effectiveness of WMAD in promoting diversity.\", \"weaknesses\": \"1. The novelty is relatively limited. As the authors mentioned, there are already many works about promoting diversity for enhanced exploration. WMAD follows them and replaces the metric of diversity with Wasserstein distance.\\n2. According to Figure 4(d), the diversity of agents' trajectories is improved significantly. However, how would WMAD perform in scenarios that require homogeneous behaviors (e.g., focus fire on the same enemy in SMAC)? I think the authors need to include results or discussion on WMAD's performance in such scenarios.\\n3. The experiment results in SMAC and SMACv2 are very significant. However, it is worth noting that the learning rate is set to 0.005 and the batch size is 128. The exploration rate is also tuned. These settings are proven to significantly improve the performance of QMIX in *pymarl2* [1]. Therefore, I wonder how the other baselines are implemented, and I'm concerned about the fairness of the experiments. Maybe the authors could clarify the implementation and hyperparameter settings of other baselines. Furthermore, It would be better to provide the results of WMAD under the hyperparameter settings in *pymarl* and *pymarl2*, respectively.\\n\\n[1] Hu J, Jiang S, Harding S A, et al. Rethinking the implementation tricks and monotonicity constraint in cooperative multi-agent reinforcement learning[J]. arXiv preprint arXiv:2102.03479, 2021.\", \"questions\": \"1. I hope the authors can provide comparison results between WMAD and baselines in terms of the number of parameters and training costs.\\n2. Please see Weaknesses 3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
1EnpStvBU8 | Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | [
"Gen Luo",
"Yiyi Zhou",
"Yuxin Zhang",
"Xiawu Zheng",
"Xiaoshuai Sun",
"Rongrong Ji"
] | In existing multimodal large language models (MLLMs), image resolution plays a significant role for granular visual recognition. However, directly increasing image resolution leads to expensive computational cost for MLLMs. In this paper, we reveal that a combination of low- and high-resolution visual features can efficiently mitigate this shortcoming. Based on this principle, we propose a novel and efficient method for MLLMs, termed Mixture-of-Resolution Adaptation (MRA). In particular, MRA adopts two visual pathways for images of different resolutions, where high-resolution visual information is embedded into the low-resolution pathway via the novel mixture-of-resolution adapters (MR-Adapters). This design also greatly reduces the input sequence length of MLLMs. To validate MRA, we apply it to a recent MLLM called LLaVA, and term the new model LLaVA-HR. We conduct extensive experiments on 17 vision-language (VL) tasks, which show that LLaVA-HR outperforms existing MLLMs on 15 VL tasks, e.g., +5.2\% on TextVQA. More importantly, both training and inference of LLaVA-HR remain efficient with MRA, e.g., 20 training hours and faster inference speed than LLaVA-NeXT. Source codes are released at: https://github.com/luogen1996/LLaVA-HR. | [
"high-resolution adaptation",
"multimodal large language models"
] | Accept (Poster) | https://openreview.net/pdf?id=1EnpStvBU8 | https://openreview.net/forum?id=1EnpStvBU8 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yVd0AHKoo8",
"x4lB4vKXiK",
"rqF1j0d72O",
"quwZNQOBib",
"mMHrtycikd",
"mLKH2p7R6K",
"lDPVlbds2H",
"ieJdpD1iEg",
"hjQJaeiI7B",
"gcYHRauKr9",
"ckhqn4hy1H",
"cdRKtkP6Ns",
"c1oqVBZmm8",
"bE66ZDGrxF",
"agYp4NDWQC",
"XSOaHNqIa6",
"XEvJhqxUrr",
"Tzfdjd3Q3c",
"TilOVZs3LH",
"OQSSmT5HOt",
"NDBJAjMbqh",
"M53SECk0So",
"JaPGsjtxaO",
"I61bq8FGUD",
"FxDVard2bM",
"Bgv5mQXtIq",
"BGcdyhqGmc",
"8r6LseliCT",
"8kVp6kO2xa",
"8bkBOvPDmS",
"6yPK92gXB7",
"5oYPyLurcQ"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732534864048,
1733134446019,
1732368042437,
1732605305118,
1732367996072,
1732260340467,
1732538370299,
1732770204919,
1732534896382,
1730080794986,
1732260322856,
1732534083684,
1732539956896,
1732259811233,
1732594269486,
1732534883366,
1730100686924,
1732260430363,
1732539943428,
1737523481484,
1734744645498,
1730190355396,
1732259780414,
1732534069721,
1730711104712,
1732260372372,
1729487029574,
1732607089693,
1732591945736,
1732260005929,
1732538785429,
1732260022926
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_b6Tu"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_TWQ4"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_HwDF"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_Mfi4"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2025/Area_Chair_3G36"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_b6Tu"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_TWQ4"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_Yq9F"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_HwDF"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2025/Reviewer_Yq9F"
],
[
"ICLR.cc/2025/Conference/Submission2025/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear reviewer TWQ4,\\n\\nThanks again for your valuable time and insightful comments. As the deadline for the Author/Reviewer discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\\n\\nBest regards!\"}",
"{\"comment\": \"Dear Reviewer Mfi4,\\n\\nWe hope that our detailed rebuttal can address your concerns about this paper.\\n\\nAs the deadline is approaching, we are looking forward to your valuable feedback and also welcome any new questions you may have.\\n\\nThanks again for your time and efforts in reviewing this paper.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"comment\": \"Dear reviewer Mfi4,\\n\\nThanks again for your valuable time and insightful comments. As the deadline for the Author/Reviewer discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\\n\\nBest regards!\"}",
"{\"comment\": \"Thank you for the authors' response. Most of my concerns are addressed, and I have decided to increase my score.\"}",
"{\"comment\": \"Dear reviewer b6Tu,\\n\\nThanks again for your valuable time and insightful comments. As the deadline for the Author/Reviewer discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\\n\\nBest regards!\"}",
"{\"comment\": \"---\\n\\n\\n>**Comment#5: MR-Adapter Placement in ViT Architecture: figure 2 shows the MR-Adapter is applied starting from the second stage of the ViT architecture. Does this mean the initial stage of the ViT does not utilize high-resolution features? Clarifying this could help illustrate the feature extraction flow more clearly.**\\n\\n\\n\\n**Response**: Thanks for this professional comment. We think that MR-Adapter should not be inserted in earlier stages for two reasons:\\n\\n1. The early stages of ViT usually aim to encode low-level visual information, which is inefficient for grasping high-level semantic and fine-grained information from the features of ConvneXt.\\n2. Since the early stage of ViT has not yet extracted high-level semantics, the early fusion of ConvneXt may hurt the original feature semantics of ViT.\\n\\nIn Tab 3, we have already conducted detailed ablations to validate the insert position of MR-Adapter, which also confirms that the last 3 stages is the optimal choice.\\n\\n| Insert Pos | VQAv2 | TVQA | MME | PoPE |\\n| -------------- | -------- | -------- | -------- | -------- |\\n| last 3 stages | **81.8** | **64.4** | **1524** | **88.0** |\\n| last stage | 81.3 | 62.8 | 1513 | 87.2 |\\n| last 2 stages | 81.6 | 63.8 | 1508 | 87.5 |\\n| last 4 stages | 81.4 | 63.1 | 1461 | 87.5 |\\n\\n------\\n\\n\\n\\n>**Comment#6: Implementation of LLaVA-1.5-448: For LLaVA-1.5-448, only the image resolution is modified at the fine-tuning stage. Have you considered modifying the visual backbone from ViT-336 to ViT-448 and retraining it for both pre-training and fine-tuning? This comparison could provide insight into performance differences when using higher resolution throughout the model\\u2019s entire training process.**\\n\\n\\n\\n**Response**: We appreciate this constructive advice. In fact, jointly optimizing the visual encoder and randomly initialized MLP layers is technically tricky and requires carefully tuned learning rates and more training data. In particular, we follow QwenVL to use a learning rate of 2e-4 to tune the ViT-448 in the pre-training stage, and observe that its performance is similar to the frozen one. To this end, we think that the frozen visual encoder setting will be more simple and stronger in our experiments.\\n\\n| Model | Stage-1 | VQAv2 | TVQA | MME | PoPE |\\n| ------------- | ------- | ----- | ---- | ---- | ---- |\\n| LLaVA-1.5-448 | Fixed | 80.4 | 59.4 | 1461 | 86.2 |\\n| LLaVA-1.5-448 | Tuned | 80.4 | 59.2 | 1420 | 86.6 |\\n| LLaVA-HR-1024 | Fixed | 81.8 | 64.4 | 1524 | 88.0 |\\n\\n------\\n\\n>**Comment#7: Seed$^I$ Performance Comparison: Could you provide the Seed$^I$ performance for LLaVA-1.5, LLaVA-1.5-448, and LLaVA-NeXT? This metric would help evaluate relative image-processing capabilities across these models.**\\n\\n\\n\\n**Response**: Of course, we are glad to provide the Seed$^I$ performance of LLaVA-1.5, LLaVA-1.5-448, LLaVA-NeXT and LLaVA-HR in the table below, which also confirms the superior performance of LLaVA-HR. We believe these results do benefit our paper and will be added them to the final version.\\n\\n| Split | LLaVA-1.5 | LLaVA-1.5-448 | LLaVA-NeXT | LLaVA-HR-1024 |\\n| -------- | :-------: | :-----------: | :--------: | :-----------: |\\n| Seed | 58.6 | 63.8 | - | 64.2 |\\n| Seed$^I$ | 66.1 | 69.8 | 70.2 | 70.6 |\\n\\n\\n\\n------\"}",
"{\"comment\": \"Thanks for detailed reply. Most of my concerns have been resolved, and I recognize this work as a good one if the confusing graphs / words are modified or corrected. Thus, I think I will maintain my rating.\"}",
"{\"comment\": \"Dear reviewer Mfi4,\\n\\nWe fully understand your busyness, and also sincerely hope that our efforts would be recognized by you. To date, we have received four positive scores by other reviewers, one of whom raised their scores to positive after reading our response. Thus, we are still looking forward to your new decision based on our response. Thank you!\\n\\nBest regards!\"}",
"{\"comment\": \"Dear reviewer Yq9F,\\n\\nThanks again for your valuable time and insightful comments. As the deadline for the Author/Reviewer discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\\n\\nBest regards!\"}",
"{\"summary\": \"This paper focuses on the efficient high-resolution adaptation for multimodal large language models (MLLMs) and proposes a mixture-of-resolution adaptation (MRA) method for MLLMs. To be specific, the proposed MRA employs two visual pathways for images of different resolutions, where high-resolution visual information is embedded into the low-resolution pathway via the mixture-of-resolution adapters. Besides, the paper conducts extensive experiments to verify the effectiveness of the proposed model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper aims to explore the high-resolution adaptation for MLLMs, which is crucial and engaging.\\n2. The paper is well written and easy to follow.\\n3. The paper is well motivated and the proposed MRA appears reasonable.\", \"weaknesses\": \"1. As demonstrated in Table 1, it seems that there is no significant gap between \\u2018Avg. Pooling\\u2019 and the proposed MRA for the VQAv2 task, which is perplexing. The paper should explain the experimental phenomenon.\\n2. The paper should carry out a qualitative experiment between the proposed MRA and the model variant in Table 2.\\n3. The paper fails to clarify the version of LLaVA-1.5 used in Figure 4.\", \"questions\": \"As mentioned, in Table 1, it seems that there is no significant gap between \\u2018Avg. Pooling\\u2019 and the proposed MRA for the VQAv2 task, which is perplexing. The paper should explain the experimental phenomenon.\\n2. The paper should carry out a qualitative experiment between the proposed MRA and the model variant in Table 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"------\\n\\n>**Comment#1: LImited performance imprvement: The performance gains with MRA are modest. The low-resolution branch operates at 448\\u00d7448, so the appropriate baseline is LLaVA-1.5 with 448-pixel resizing. Compared to this baseline, the improvements MRA achieves are minimal (e.g., +0.7 on VQA v2, +31 on MME, and +0.8 on POPE). Training cost and inference speed are also similar between MRA and LLaVA-1.5-448, reducing the practical benefit.**\\n\\n\\n\\n**Response**: Thanks for this comment. We fully agree that LLaVA-1.5-448 should be a suitable baseline for LLaVA-HR-1024. As you said, with similar costs, LLaVA-HR can already achieve varying degrees of gain on low-resolution or medium-resolution benchmarks such as VQAv2 and MME. But as a high-resolution method, more gains of LLaVA-HR should be observed on high-resolution benchmarks such as TextVQA and DocVQA. To help you better understand our contribution, we provide an apple-to-apple comparison on these benchmarks in the table below, which shows the clear gains of LLaVA-HR-1024 over LLaVA-1.5-448.\\n\\n| Model | TVQA | DocVQA | InfoVQA | AI2D | ChartQA |\\n| ---------------- | -------- | -------- | -------- | -------- | -------- |\\n| LLaVA-1.5-7B-448 | 62.1 | 30.3 | 26.8 | 55.1 | 18.4 |\\n| LLaVA-HR-7B-1024 | **67.1** | **45.2** | **29.3** | **55.8** | **24.0** |\\n\\n------\\n\\n>**Comment#2: Limited novelty: the dual-pathway, high-and-low-resolution approach isn\\u2019t particularly new. Similar strategies have been explored in other works, such as Mini-Gemini and CogAgent, yet the authors do not compare their method with these models. Explicitly differentiating MRA from these approaches would help clarify its unique contributions.**\\n\\n\\n\\n\\n**Response**: Thanks for this kindly suggestion. We would like to recognize that Mini-Gemini, as the concurrent work to LLaVA-HR, do have the similar idea in dual visual pathways. However, in terms of micro designs, LLaVA-HR is quit different and efficient against Mini-Gemini. From the comparison in Tab 4, we can see that LLaVA-HR with 1,024 visual tokens can outperform MiniGemini with 2880 visual tokens on 5 of 6 benchmarks. \\n\\n\\n\\nCompared to CogAgent, LLaVA-HR still establishes advantage in simplicity and efficiency. For example, the high-resolution cross-module of CogAgent requires a large amount of data for pre-training, while our MRA does not. To further validate the benefit of LLaVA-HR, we would like to provide a relatively fair comparison on high-resolution benchmarks in the table below, where CogAgent uses much more training data. From this table, we also see the better performance of LLaVA-HR than CogAgent on 3 of 4 benchmarks.\\n\\n\\n\\nYour advice is highly beneficial to our paper, and all comparisons will be added in our final version.\\n\\n\\n\\n| Model | TVQA | DocVQA | InfoVQA | ChartQA |\\n| ------------------------- | ---- | ------ | ------- | ------- |\\n| CogAgent | 76.1 | 81.6 | 44.5 | 68.4 |\\n| LLaVA-HR-7B-1024$\\\\dagger$ | 73.8 | 85.8 | 52.3 | 77.6 |\\n\\n------\\n\\n\\n>**Comment#3: Limited generalizability: the authors apply MRA solely to LLaVA-1.5. Expanding the evaluation to other MLLMs, like Qwen-VL, would strengthen claims of the method\\u2019s generalizability across architectures.**\\n\\n\\n\\n\\n\\n**Response**: We fully respect to your concerns regarding the generalizability. 
However, the LLaVA-based architecture has almost become the mainstream paradigm of existing MLLMs, thus LLaVA-1.5 may be the most representative and generalizable baseline. Based on your concerns, we have tried to combine MRA with LLaVA-NeXT, another representative MLLM architecture with a dynamic high-resolution strategy. By applying MRA to each dynamic patch for feature extraction, we observe additional gains on TVQA and PoPE. \\n\\n| Model | Res | VQAv2 | TVQA | MME | PoPE |\\n| --------------- | ---- | ----- | ---- | ---- | ---- |\\n| LLaVA-HR | 1024 | 81.9 | 67.1 | 1554 | 87.6 |\\n| LLaVA-NeXT | 1344 | 81.8 | 64.9 | 1519 | 86.5 |\\n| LLaVA-NeXT+MRA | 3072 | 81.9 | 70.9 | 1450 | 88.0 |\\n\\n------\\n\\n>**Comment#4: Clarification on Visual Encoder Notation: In line 206, it states that $\\mathcal{F}_{I_l}$ and $\\mathcal{F}_{I_h}$ are visual encoders for high- and low-resolution images, which seems to be a typo. The correct notation should reflect that $\\mathcal{F}_{I_l}$ and $\\mathcal{F}_{I_h}$ correspond specifically to low- and high-resolution encoders, respectively.**\\n\\n\\n\\n**Response**: Thanks for your careful review; we will revise these typos in our final version. In addition to the typos you mentioned, we will carefully revise our paper to improve the readability.\\n\\n\\n\\n------\"}",
"{\"comment\": \"Dear reviewer Mfi4,\\n\\nWe are sorry that this message may bother you again. We sincerely hope that you could take your valuable time to read our response. Since the discussion deadline is already approaching, we are worry that there will not be enough time to address your further concerns.\\n\\nBest regards!\"}",
"{\"comment\": \"Thanks for your encouraging comment. We would like to appreciate again for your valuable suggestions.\"}",
"{\"comment\": \"> **Comment#5\\uff1a What do you mean by \\\"stages\\\" in vision transformers?**\\n\\n\\n\\n**Response:** We evenly divide the ViT layer into four stages, each of which receives the features of ConvNeXt through MR-Adapter. Based on your comment, we will add more explanations to our final revision.\\n\\n\\n\\n------\\n\\n> **Comment#6\\uff1a And, currently only final features from ConvNext is utilized, is there any experiments of multi-stage feature integration for that of CNN encoder?**\\n\\n**Response:** Yes, we have tried to fuse three-stage features of ConvNext into ViT, but performance gains are minor. To keep the simplicity of our design, we decide to use the final features of ConvNext in our experiments.\\n\\n| Res | CNN Features | VQAv2 | TVQA | MME | PoPE |\\n| ---- | ------------ | ----- | ---- | ---- | ---- |\\n| 768 | Final stage | 81.8 | 64.3 | 1524 | 88.0 |\\n| 768 | Three stages | 81.8 | 64.6 | 1480 | 88.1 |\\n\\n------\"}",
"{\"comment\": \"Thank you for your support! We are pleased to address your concerns and greatly appreciate your effort in helping us to strengthen our work.\\n\\nBest regards!\"}",
"{\"comment\": \"Dear reviewer HwDF,\\n\\nThanks again for your valuable time and insightful comments. As the deadline for the Author/Reviewer discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\\n\\nBest regards!\"}",
"{\"summary\": \"This paper presents a new approach for efficient multimodal large language models (MLLMs) by addressing the high computational cost of processing high-resolution images. The authors introduce Mixture-of-Resolution Adaptation (MRA), a method that combines both low- and high-resolution visual features to enhance model efficiency without compromising visual recognition quality. MRA uses two visual pathways: one for low-resolution and one for high-resolution images, with novel mixture-of-resolution adapters (MR-Adapters) that embed high-resolution information into the low-resolution pathway. This design significantly reduces input sequence length and computational load.\\n\\nThe authors apply MRA to the LLaVA model, resulting in an improved version called LLaVA-HR, which demonstrates superior performance across 15 out of 17 vision-language (VL) tasks, including a 5.2% increase in accuracy on TextVQA. Furthermore, LLaVA-HR maintains efficient training and inference times, showing improvements over LLaVA-NeXT.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n\\n2. Figures 2 and 3 are effectively designed and enhance understanding of the framework.\\n\\n3. The ablation study is solid to reveal the contribution of component.\", \"weaknesses\": \"> ### 1. LImited performance imprvement.\\n\\nThe performance gains with MRA are modest. The low-resolution branch operates at 448\\u00d7448, so the appropriate baseline is LLaVA-1.5 with 448-pixel resizing. Compared to this baseline, the improvements MRA achieves are minimal (e.g., +0.7 on VQA v2, +31 on MME, and +0.8 on POPE). Training cost and inference speed are also similar between MRA and LLaVA-1.5-448, reducing the practical benefit.\\n\\n> ### 2. Limited novelty\\n\\nThe dual-pathway, high-and-low-resolution approach isn\\u2019t particularly new. Similar strategies have been explored in other works, such as Mini-Gemini and CogAgent, yet the authors do not compare their method with these models. Explicitly differentiating MRA from these approaches would help clarify its unique contributions.\\n\\n> ### 3. Limited generalizability\\n\\nThe authors apply MRA solely to LLaVA-1.5. Expanding the evaluation to other MLLMs, like Qwen-VL, would strengthen claims of the method\\u2019s generalizability across architectures.\\n\\n\\n[1] CogAgent: A Visual Language Model for GUI Agents\\n[2] Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models\", \"questions\": \"> ### 1. Clarification on Visual Encoder Notation\\n\\nIn line 206, it states that $F_{I_l}$ and $F_{I_h}$ are visual encoders for high- and low-resolution images, which seems to be a typo. The correct notation should reflect that $F_{I_l}$ and $F_{I_h}$ correspond specifically to low- and high-resolution encoders, respectively.\\n\\n> ### 2. MR-Adapter Placement in ViT Architecture\\n\\nFigure 2 shows the MR-Adapter is applied starting from the second stage of the ViT architecture. Does this mean the initial stage of the ViT does not utilize high-resolution features? Clarifying this could help illustrate the feature extraction flow more clearly.\\n\\n> ### 3. Implementation of LLaVA-1.5-448\\n\\nFor LLaVA-1.5-448, only the image resolution is modified at the fine-tuning stage. Have you considered modifying the visual backbone from ViT-336 to ViT-448 and retraining it for both pre-training and fine-tuning? 
This comparison could provide insight into performance differences when using higher resolution throughout the model\\u2019s entire training process.\\n\\n> ### 4. $SEED^{img}$ Performance Comparison\\n\\nCould you provide the $SEED^{img}$ performance for LLaVA-1.5, LLaVA-1.5-448, and LLaVA-NeXT? This metric would help evaluate relative image-processing capabilities across these models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"------\\n\\n>**Comment#1** \\uff1a **The processing of both low-resolution and high-resolution images in the paper is mainly square-based, such as 448x448 and 1024x1024. Is there any adaptation mechanism for handling images with different aspect ratios? Would processing high-resolution images in a way that matches the input image's aspect ratio lead to better performance?**\\n\\n\\n\\n**Response**: We appreciate for this professional comment. In practice, we have already preserved the aspect ratio of the image and padded the short sides with zeros. Nevertheless, your advice also inspires us to combine MRA with existing dynamic high-resolution methods [A]. By doing so, MRA not only achieves adaptation to arbitrary aspect ratios, but also further increases the resolution to 3k. As you expected, the model performance is further improved on TVQA and PoPE.\\n\\n| Model | Res | VQAv2 | TVQA | MME | PoPE |\\n| --------------- | ---- | ----- | ---- | ---- | ---- |\\n| LLaVA-HR | 1024 | 81.9 | 67.1 | 1554 | 87.6 |\\n| LLaVA-HR+DyRes | 3072 | 81.9 | 70.9 | 1450 | 88.0 |\\n\\n[A] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024a \\n\\n\\n\\n------\\n\\n>**Comment#2 \\uff1aFor high-resolution image inputs, we are more focused on improvements in OCR-related tasks. The results for OCRVQA in Table 5 don\\u2019t seem to be the best. Additionally, Table 6 only presents results for LLaVA-HR+, but it lacks results for LLaVA-HR-7B, LLaVA-HR-13B, and LLaVA-HR-X with less training data. It would be helpful to include these results to better illustrate the impact of MRA on OCR-related tasks.**\\n\\n\\n\\n**Response**: Thanks for your careful review. In Table 5, Qwen-VL uses much more OCR-related data than LLaVA-HR, so it performs slightly better on OCRVQA, i.e., +1.5%. For this reason, we make a more fair comparison in Tab 6, where LLaVA-HR uses similar or fewer data than existing methods. In Tab 6, even with less model size and training data, LLaVA-HR still outperforms existing methods like DocOwl-1.5-Chat on all OCR-related tasks. \\n\\nMoreover, we also fully agree that the LLaVA-HR with less data should also be compared with existing methods on these OCR-related data. Thus, we provide the apple-to-apple comparison between LLaVA-1.5 and LLaVA-HR in the table below, where the training data and the LLM are kept the same. From this table, the benefit of MRA can still be observed on all OCR-related tasks. We will update these results in our final revision.\\n\\n\\n\\n| Model | TVQA | DocVQA | InfoVQA | AI2D | ChartQA |\\n| ------------------ | :------: | :------: | :------: | :------: | :------: |\\n| LLaVA-1.5-7B | 58.2 | 28.1 | 25.6 | 55.2 | 18.2 |\\n| **LLaVA-HR-7B** | **67.1** | **45.2** | **29.3** | **55.8** | **24.0** |\\n| LLaVA-1.5-13B | 61.3 | 30.2 | 29.3 | 59.2 | 18.2 |\\n| **LLaVA-HR-X-14B** | **70.9** | **52.5** | **34.5** | **59.7** | **27.6** |\\n\\n------\\n\\n>**Comment#3\\uff1aCould the authors further explain why the MR-Adapter is inserted in the last 3 stages? What is the design principle behind this decision? Could it be inserted in the earlier stages instead?**\\n\\n\\n\\n**Response**: Great question! We think that MR-Adapter should not be inserted in earlier stages for two reasons:\\n\\n1. 
The early stages of ViT usually aim to encode low-level visual information, which is inefficient for grasping high-level semantic and fine-grained information from the features of ConvNeXt.\\n2. Since the early stages of ViT have not yet extracted high-level semantics, the early fusion of ConvNeXt may hurt the original feature semantics of ViT.\\n\\nIn Tab 3, we have already conducted detailed ablations to validate the insert position of MR-Adapter, which also confirms that the last 3 stages are the optimal choice.\\n\\n| Insert Pos | VQAv2 | TVQA | MME | PoPE |\\n| ------------------ | -------- | -------- | -------- | -------- |\\n| **last 3 stages** | **81.8** | **64.4** | **1524** | **88.0** |\\n| last stage | 81.3 | 62.8 | 1513 | 87.2 |\\n| last 2 stages | 81.6 | 63.8 | 1508 | 87.5 |\\n| last 4 stages | 81.4 | 63.1 | 1461 | 87.5 |\\n\\n------\"}",
"{\"comment\": \"Thanks for your encouraging comment. We would like to appreciate again for your valuable suggestions.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"metareview\": \"This paper proposes a mixture-of-resolution adaption method for multimodal large language model (MLLM). It consists of two visual pathways for images of different resolutions, ViT/ConveNext for low/high-resolution. The information from high-res input is adapted to low-res features by using a design of mixture-of-resolution adapter, which won't increase the token length when feeding to the LLMs, thus marginal computational overhead when using high-res image input. The experiments have shown the effectiveness and efficiency of the proposed method, and better performance than other MLLMs.\", \"the_contributions_include\": \"1) the method is somehow novel; 2) the investigated problem is important; 3) the paper is well written and easy to understand; 4) the experimental results look good. The main concerns are 1) limited novelty (Mfi4); 2) incremental improvement on some datasets; 3) limited generation on model architecture; 4) missing additional ablations/clarification. Although most of them are well resolved by the rebuttal, some are left, including the novelty given existing work, and generalization to other MLLM architecture (only on LLaVA series). The AC also has the same concerns on these. In addition, the AC is quite concerned on why using heterogeneous vision encoders instead of homogeneous ones, if the main motivation is on mixture of resolution. Resolution is one of fundamental problems in computer vision, but this paper doesn't seem to dive deep enough. For example, ViT is not scale invariant, and simply increasing the resolution to too large one will definitely decreases the performance. Those naive resizing experiments without looking into ViT pretraining don't make much sense to me. In addition, the AC found many of the experimental details are missing. For example, what's \\\"Resamper\\\" in Table 1; why Resamper produces 64 tokens but compared with MRA with 576 tokens; how do you \\\"+ConvNeXT\\\" in the response to b6Tu's comment #1? Without the details, it is difficult to understand the values of those numbers.\\n\\nHowever, most of the reviewers are happy with this submission and rebuttal. The AC is OK to accept it. But the authors are strongly recommended to fix those issues and make the paper more clear.\", \"additional_comments_on_reviewer_discussion\": \"TWQ4 asked for some clarification and experiments, and was happy with the rebuttal and no change on the score.\\nFor b6Tu, the main concerns are comparison fairness by adding ConvNext visual encoder; missing details; overall contribution is not enough. They were resolved by the rebuttal and the reviewer increased the score to 6.\\nFor Mfi4, the main concerns are incremental improvement; limited novelty; limited generalization; missing clarification; missing experiments. Although Mfi4 didn't check in after the rebuttal, the AC thinks the generalization concern is not fully resolved, because the additional experiments are still on LLaVA series.\\nHwDF was concerned on small improvement on VQAv2 and missing some visualization/clarification, and happy in the end, and maintain original score.\\nYq9F was concerned on missing experiments; OCRVQA result; missing clarification, but happy with the rebuttal and maintained original score.\"}",
"{\"summary\": \"In this paper, the authors propose the Mixture-of-Resolution Adaptation method to embed the high-resolution features into the low-resolution pathway. The MRA enhances the visual perception ability in MLLMs, and allow them to benefit from high-resolution visual inputs with reduced computational cost. Extensive experiments demonstrate the effectiveness of the MRA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The comparison of MRA and other high-resolution adaptation solutions is clear, highlighting the effectiveness of the dual visual pathways.\\n3. The experiments are well-conducted and quite comprehensive.\\n4. The study demonstrates strong performance on most datasets compared with other MLLMs.\", \"weaknesses\": \"1. In Table 1, the MRA is compared to other high-resolution adaptation methods that use a single visual pathway. However, the introduction of a new visual encoder in the MRA raises concerns about the fairness of this comparison. Could the authors provide a baseline that uses dual visual pathways without the MR-Adapter?\\n2. The analyses of the MRA\\u2019s architecture and design details are insufficient, particularly regarding $\\\\mathcal{F}_l$, $\\\\mathcal{F}_h$, and the gate function. Could the authors provide ablation studies on these components?\\n3. The main novelty of the paper appears to be the Mixture-of-Resolution Adapter. While the application of dual visual pathways for high-resolution adaptation in MLLMs is innovative, the overall contribution of the paper seems somewhat insufficient. If MR-Adapter could integrate a wider variety of low- and high- resolution visual encoders, its contribution would be significantly enhanced.\", \"questions\": \"1. There are several micro-designs in the Mixture-of-Resolution Adapter, including $\\\\mathcal{F}_l$, $\\\\mathcal{F}_h$, and the gate function. Why do we choose a conv layer for $\\\\mathcal{F}_l$, an MLP layer for $\\\\mathcal{F}_h$? Are these layers and functions necessary? Please provide some analyses.\\n\\n2. In the Mixture-of-Resolution Adapter, the authors choose the addition operation to fuse features of different resolutions. (Deformable) Cross Attention is also an option. I wonder which method is better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"------\\n> **Comment#1\\uff1a In section4.3 (line 258), the statement, global average pooling is confusion, is the features are pooled into 1 global token? If so, it seems to be not consistent with figures. Please clarify the exact dimensions of fv after global average pooling.**\\n\\n\\n\\n**Response:** We feel sorry for any confusion of Fig 3, which does not detail the pooling operation. Actually, global average pooling is only used to compute the gating vector $g\\\\in d$. In this process, the concatenated visual features from dual pathways are globally pooled to a vector $f_v \\\\in \\\\mathbb{R}^{2d}$, and produce the gating vector via Eq. 4. We will add more details in Fig 3 to improve its readability.\\n\\n------\\n\\n\\n\\n> **Comment#2\\uff1a In Table 1, resizing LLaVA-1.5 to 672 pix achieves close performance with 768pix version of LLaVA-HR, is there a direct comparison between 768-pix version of them?**\\n\\n\\n\\n**Response:** Yes, we are glad to provide the 756-pix version of LLaVA-1.5 for comparison. Note that 768 pixels does not evenly divide the stride of ViT ($14 \\\\times 14$), so we choose 756 pixels for better performance.\\n\\n| Model | Res | V-token | VQAv2 | TVQA | MME | Speed |\\n| ------------ | -------- | -------- | -------- | -------- | -------- | ------------ |\\n| LLaVA-1.5 | 672 | 2304 | 81.5 | 64.2 | 1498 | 12.7 t/s |\\n| LLaVA-1.5 | 756 | 2916 | 81.0 | 63.2 | 1436 | 10.7 t/s |\\n| **LLaVA-HR** | **768** | **576** | **81.8** | **64.3** | **1524** | **23.5 t/s** |\\n| LLaVA-1.5 | 1022 | 5329 | 74.2 | 37.8 | 1266 | 5.6 t/s |\\n| **LLaVA-HR** | **1024** | **1024** | **81.9** | **67.1** | **1554** | **19.7 t/s** |\\n\\nAs shown in the table, a higher resolution of LLaVA-1.5 does not necessarily lead to higher performance, but it does incur a greater computational overhead. In stark comparison, LLaVA-HR achieves **2$\\\\times$ faster inference speed** and **higher performance**. We hope that these results can better help you understand our contribution.\\n\\n------\\n\\n\\n\\n\\n> **Comment#3\\uff1a In table 2, there is an ablation of \\\"tune vision\\\" referring to finetune vision encoder. However, I think the vision encoder in LLaVA-1.5 is fixed, can you provide a detailed description about this. For example, implementation and aim of tuning vision encoder.**\\n\\n\\n\\n**Response:** Thanks for your detailed review. Actually, fine-tuning the vision encoder can usually improve performance, especially as image resolution increases. **In this case, all of baselines in ablations (including LLaVA-1.5) adopt \\\"tune vision\\\" as the default setting, thus providing a fair and strong comparison with LLaVA-HR.** \\n\\n\\n\\nTo further address your concerns, we provide the ablation of \\\"tune vision\\\" for LLaVA-1.5 in the table below, which shows that stronger baseline performance can be obtained via \\\"tune vision\\\".\\n\\n| Model | Res | Vis. Enc. | VQAv2 | TVQA | MME | PoPE |\\n| ------------- | ------- | --------- | -------- | -------- | -------- | -------- |\\n| LLaVA-1.5 | 336 | Fixed | 78.5 | 58.6 | 1510 | 85.9 |\\n| **LLaVA-1.5** | **336** | **Tuned** | **80.4** | **59.4** | **1461** | **86.2** |\\n| LLaVA-1.5 | 448 | Fixed | 79.3 | 58.9 | 1480 | 86.7 |\\n| **LLaVA-1.5** | **448** | **Tuned** | **81.1** | **62.1** | **1493** | **87.2** |\\n\\n------\\n\\n\\n\\n> **Comment#4\\uff1a LLaVA-HR is proposed to process input resolution of 1024, what if input images larger than 1024. 
Is there any extended experiments for even larger images such as 4K ones.**\\n\\n\\n\\n**Response:** Thanks for this professional comment. Based on your suggestion, we consider two ways to further improve resolution for LLaVA-HR:\\n\\n- Resize: Directly resizing image to a larger resolution.\\n- DyRes: Combining with the dynamic high resolution from LLaVA-NeXT, and adopting our dual-pathway design to each dynamic patch.\\n\\nAs shown in the table below, the resolution of 1024 can already achieve promising results for most multimodal tasks. Moreover, LLaVA-HR can be seamlessly combined with the dynamic high-resolution strategy of LLaVA-NeXT to further boost performance on OCR-related tasks, i.e., +3% on TextVQA. \\n\\n| Model | Res | VQAv2 | TVQA | MME | PoPE |\\n| --------------- | ---- | ----- | ---- | ---- | ---- |\\n| LLaVA-HR | 1024 | 81.9 | 67.1 | 1554 | 87.6 |\\n| LLaVA-NeXT | 1344 | 81.8 | 64.9 | 1519 | 86.5 |\\n| LLaVA-HR+Resize | 1536 | 81.8 | 67.9 | 1493 | 87.7 |\\n| LLaVA-HR+DyRes | 3072 | 81.9 | 70.9 | 1450 | 88.0 |\\n\\n------\"}",
"{\"comment\": \"Dear reviewer b6Tu,\\n\\nWe are sorry that this message may bother you again. We sincerely hope that you could take your valuable time to read our response. Since the discussion deadline is already approaching, we are worry that there will not be enough time to address your further concerns.\\n\\nBest regards!\"}",
"{\"summary\": \"This paper aims to enhance MLLM by enlarging resolution of input images. By combining features from ViT and a CNN encoder through an adapter, performances of MLLM are improved a lot. Meanwhile, fusing high-resolution features from convolution-based encoder into low-resolution features from transformer-based encoder does not increase vision tokens to LLM decoder, so that additional computational cost is low. Proposed LLaVA-HR increases effective resolution for MLLM to 1024 and outperforms concurrent MLLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work proposed a novel method to increase resolutions of MLLMs, which is an important problem in the field and critical in fine-grained vision tasks. Without large modification of training recipe and computational cost of its baseline, LLaVA-1.5.\\nEvalutions are conducted on many existing benchmarks and performance of LLaVA-HR is quite impressive. Besides, the computational cost involved is quite small compared with related works.\", \"weaknesses\": \"Please see as in questions.\", \"questions\": \"1. In section4.3(line 258), the statement, global average pooling is confusion, is the features are pooled into 1 global token? If so, it seems to be not consistent with figures. Please clarify the exact dimensions of fv after global average pooling.\\n2. In Table 1, resizing LLaVA-1.5 to 672 pix achieves close performance with 768pix version of LLaVA-HR, is there a direct comparision between 768-pix version of them?\\n3. In table 2, there is an ablation of \\\"tune vision\\\" referring to finetune vision encoder. However, I think the vision encoder in LLaVA-1.5 is fixed, can you provide a detailed description about this. For example, implementation and aim of tuning vision encoder.\\n4. LLaVA-HR is proposed to process input resolution of 1024, what if input images larger than 1024. Is there any extended experiments for even larger images such as 4K ones.\\n5. What do you mean by \\\"stages\\\" in vision transformers? And, currently only final features from ConvNext is utilized, is there any experiments of multi-stage feature integration for that of CNN encoder?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"------\\n\\n>**Comment#1 \\uff1aAs demonstrated in Table 1, it seems that there is no significant gap between \\u2018Avg. Pooling\\u2019 and the proposed MRA for the VQAv2 task, which is perplexing. The paper should explain the experimental phenomenon.**\\n\\n\\n\\n**Response**: Thanks for this comment. We would like to explain this from two aspects:\\n\\n1. **In a fair comparison setting, performance gains of MRA are indeed noticeable on VQAv2, i.e., +1.3% over \\u2018Avg. Pooling\\u2019.** In practice, improving VQAv2 performance to above 80 is quit challenging. To the best of our knowledge, PaLI-X (55B) achieves state-of-the-art VQA performance of 86.0, which is only 3.7% higher than our much smaller model (LLaVA-HR-X, 14B). \\n2. **Most images of VQAv2 are low- and middle-resolution ones, so the high-resolution benefit of MRA cannot be fully reflected in VQAv2.** This is also evidenced by the larger gains of MRA in more fine-grained benchmarks such as 7.5% on TextVQA.\\n\\nFollowing your suggestion, we will add these discussions in our final version.\\n\\n\\n\\n------\\n\\n>**Comment#2** \\uff1a **The paper should carry out a qualitative experiment between the proposed MRA and the model variant in Table 2.**\\n\\n\\n\\n**Response**: Thanks for this constructive suggestion. We fully agree your advice and have provided several visualization examples in our appendix. We believe that these comparisons do contribute to the understanding of our paper.\\n\\n\\n\\n------\\n\\n>**Comment#3\\uff1a The paper fails to clarify the version of LLaVA-1.5 used in Figure 4.**\\n\\n\\n\\n**Response**: Thank you for your careful review. We apologize for the missing model details in Figure 4. In fact, we use LLaVA-1.5-13B from the official checkpoint for comparison, and its LLM is the same as LLaVA-HR-X. According to your advice, we will add more details in our final version.\\n\\n------\"}",
"{\"summary\": \"This paper introduces an novel high-resolution adaptation method for multimodal large language models (MLLMs), termed Mixture-of-Resolution Adaptation (MRA). MRA employs a dual visual pathway design to process high- and low-resolution images simultaneously from both macro and micro perspectives, while integrating high-resolution information into the low-resolution pathway through the Mixture-of-Resolution Adapter (MR-Adapter). This approach reduces the number of visual tokens while preserving rich visual semantics, significantly enhancing the model's visual descriptive power.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Unlike previous strategies that divide high-resolution images into sub-images, this paper introduces an innovative dual visual pathway structure, offering a fresh perspective for high-resolution adaptation. The MR-Adapter effectively embeds high-resolution information into the low-resolution pathway, introducing a new adaptation mechanism within the visual processing framework of MLLMs. This design overcomes the efficiency limitations of traditional high-resolution processing.\", \"The paper conducts extensive experiments across multiple vision-language tasks, providing a range of comparisons, with promising results.\", \"The writing is clear and easy to follow. It effectively highlights MRA's performance gains and efficiency advantages across different tasks, helping readers fully understand the model\\u2019s effectiveness and strengths.\"], \"weaknesses\": \"1. The processing of both low-resolution and high-resolution images in the paper is mainly square-based, such as 448x448 and 1024x1024. Is there any adaptation mechanism for handling images with different aspect ratios? Would processing high-resolution images in a way that matches the input image's aspect ratio lead to better performance?\\n\\n2. For high-resolution image inputs, we are more focused on improvements in OCR-related tasks. The results for OCRVQA in Table 5 don\\u2019t seem to be the best. Additionally, Table 6 only presents results for LLaVA-HR+, but it lacks results for LLaVA-HR-7B, LLaVA-HR-13B, and LLaVA-HR-X with less training data. It would be helpful to include these results to better illustrate the impact of MRA on OCR-related tasks.\\n\\n3. Could the authors further explain why the MR-Adapter is inserted in the last 3 stages? What is the design principle behind this decision? Could it be inserted in the earlier stages instead?\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your encouraging comment. We would like to appreciate again for your valuable suggestions, which play a crucial role in improving our work.\\n\\nBest regards!\"}",
"{\"comment\": \"I am sorry for the late response. The authors have addressed my concerns, and I maintain my positive score of this paper.\"}",
"{\"comment\": \"------\\n>**Comment#1**\\uff1a **In Table 1, the MRA is compared to other high-resolution adaptation methods that use a single visual pathway. However, the introduction of a new visual encoder in the MRA raises concerns about the fairness of this comparison. Could the authors provide a baseline that uses dual visual pathways without the MR-Adapter?** \\n\\n\\n\\n**Response:** Thanks for this suggestion. We fully respect your concerns and think that our dual-pathway designs including the MR-Adapter and the multi-resolution pathway (rather than the additional encoder) play crucial roles in LLaVA-HR. Therefore, we conduct additional ablations in the table below, which shows that **the gain of the additional visual encoder is minor if our designs are not used.** We hope these comparisons can further eliminate your confusion.\\n\\n| Model | VQAv2 | TVQA | MME | PoPE |\\n| --------- | ----- | ---- | ---- | ---- |\\n| LLaVA-1.5 | 80.4 | 59.4 | 1461 | 86.2 |\\n| +ConvNeXT | 80.4 | 59.6 | 1501 | 86.3 |\\n\\n------\\n\\n\\n>**Comment#2**\\uff1a **The analyses of the MRA\\u2019s architecture and design details are insufficient, particularly regarding $\\\\mathcal{F}_l$, $\\\\mathcal{F}_h$ and the gate function. Could the authors provide ablation studies on these components?** \\n\\n\\n\\n**Response:** Thanks for your detailed review. As discussed above, our main focus and contribution are the macro designs of MRA (the MR-Adapter and the multi-resolution pathway) , whose motivations and ablations are detailed in Sec 4.2-4.3 and Tab 1-2, respectively. As for the micro design of MRA, we aim to explore its optimal choice through empirical studies, and part results are already listed in Tab 3 (including the impact of $\\\\mathcal{F}_l$, $\\\\mathcal{F}_h$, fusion direction and insert position).\\n\\n\\n\\nTo further address your concerns, we provide more ablations of the gate function in the table below. As shown in the table, **their significance is far from the macro design of MRA, so we may lack detailed discussions due to page limitations**. Based on your suggestion, we will add these results in our final version.\\n\\n| $\\\\tau_h$ | $\\\\tau_l$ | VQAv2 | TVQA | MME | PoPE |\\n| --------- | ----- | ---- | ---- | ---- | ---- |\\n| **mlp** | **conv** | **81.8** | **64.4** | **1524** | **88.0** |\\n| conv | conv | 81.6 | 64.6 | 1499 | 87.7 |\\n| conv | mlp | 81.5 | 64.2 | 1517 | 87.6 |\\n| mlp | mlp | 81.5 | 64.1 | 1501 | 87.4 |\\n\\n| Gate Function | VQAv2 | TVQA | MME | PoPE |\\n| ----- | ---- | ---- | ---- | ---- |\\n| **tanh** | **81.8** | **64.4** | **1524** | **88.0** |\\n| sigmoid | 81.7 | 64.3 | 1567 | 86.9 |\\n| H-sigmoid | 81.6 | 64.4 | 1525 | 87.8 |\\n\\n------\\n\\n\\n>**Comment#3**\\uff1a**The main novelty of the paper appears to be the Mixture-of-Resolution Adapter. While the application of dual visual pathways for high-resolution adaptation in MLLMs is innovative, the overall contribution of the paper seems somewhat insufficient.**\\n\\n\\n\\n**Response:** Thanks for this comment. To help you better understand our innovations, we would like to highlight the contribution of our dual-pathway design from two aspects.\\n\\n1. **Design principle.** As discussed in **Comment#1**, directly combing two visual pathways does not lead to obvious performance gains in MLLMs. 
Therefore, **the main advantage of the dual-pathway comes from our design principle, which fully considers the visual complementarity of different encoders from the perspective of functionality and alignment**, as described in Sec 4.2. \\n2. **Technical details.** Previous works we compared in Table 4 (such as Sphinx) also mix multiple visual features, but their motivations and technical details are quite different. For example, Sphinx mixes visual embeddings for better representation, but still requires a dynamic high-resolution method for high-resolution encoding. **In contrast, we unify feature mixing and resolution enhancement into one dual-pathway design, greatly improving the efficiency.**\\n\\nOverall, we believe that the design principle and technical details of MRA will provide good hints for future work.\\n\\n------\"}",
"{\"comment\": \"The authors have addressed my concerns, and I maintain my positive score of this paper.\"}",
"{\"comment\": \"---\\n>**Comment#4\\uff1aIf MR-Adapter could integrate a wider variety of low- and high- resolution visual encoders, its contribution would be significantly enhanced.**\\n\\n\\n\\n**Response:** Thanks for this advice. We focus on high-resolution adaptation for MLLMs, thus two visual pathways can already efficiently achieve our target. However, MR-Adapter can also be directly extended to more visual encoders. To validate this, we conduct a toy experiment in the table below, which further fuses features of SigLip into the CLIP-ViT ones. From the table, experimental results also confirm the generalization ability of MR-Adapter.\\n\\n| Encoders | VQAv2 | TVQA | MME | PoPE |\\n| ----- | ---- | ---- | ---- | ---- |\\n| CLIP + ConvNext | 81.8 | 64.4 | **1524** | **88.0** |\\n| CLIP + ConvNext +SigLip | **82.0** | **65.5** | 1501 | 87.9 |\\n\\n------\\n\\n>**Comment#5 \\uff1aIn the Mixture-of-Resolution Adapter, the authors choose the addition operation to fuse features of different resolutions. (Deformable) Cross Attention is also an option. I wonder which method is better?**\\n\\n\\n\\n**Response:** Yes, cross attention is a viable choice for feature mixing. However, compared with MR-Adapter, cross attention requires longer training steps to converge and incurs more computational cost. Therefore, we adopt the simple yet effective design for MR-Adapter.\\n\\n| Fusion Module | VQAv2 | TVQA | MME | PoPE |\\n| ----- | ---- | ---- | ---- | ---- |\\n| MR-Adapter | **81.8** | **64.4** | **1524** | **88.0** |\\n| Cross Attention | 80.9 | 64.0 | 1483 | 87.5 |\\n\\n------\"}"
]
} |
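The rebuttal above spells out the MR-Adapter's micro-design: the dual-pathway features are concatenated and globally average-pooled into $f_v \in \mathbb{R}^{2d}$, a tanh gate produces a gating vector $g \in \mathbb{R}^d$, $\mathcal{F}_l$ is a conv layer, $\mathcal{F}_h$ is an MLP, and fusion is additive. A minimal sketch of such a gated fusion follows; it assumes a two-layer gate MLP and a gate that rescales only the high-resolution branch (neither detail is given in the record), so it is an illustration rather than the authors' implementation.

```python
# Illustrative sketch of an MR-Adapter-style gated fusion; not the authors' code.
# Assumptions: F_l is a 3x3 conv, F_h is a two-layer MLP, the gate is a two-layer
# MLP with tanh output, and the gate rescales only the high-resolution branch.
import torch
import torch.nn as nn

class MRAdapterSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.f_l = nn.Conv2d(dim, dim, kernel_size=3, padding=1)  # low-res mapping (conv)
        self.f_h = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))  # high-res mapping (MLP)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim), nn.Tanh())

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # low, high: (B, d, H, W) feature maps from the two visual pathways,
        # already brought to the same spatial size and channel width d.
        b, d, h, w = low.shape
        low_m = self.f_l(low)
        high_m = self.f_h(high.flatten(2).transpose(1, 2)).transpose(1, 2).reshape(b, d, h, w)
        # Global average pooling of the concatenated features gives f_v in R^{2d};
        # the gate then produces g in R^d, which rescales the high-res contribution.
        f_v = torch.cat([low_m, high_m], dim=1).mean(dim=(2, 3))   # (B, 2d)
        g = self.gate(f_v).view(b, d, 1, 1)                        # (B, d, 1, 1)
        return low_m + g * high_m                                  # addition-based fusion

if __name__ == "__main__":
    adapter = MRAdapterSketch(dim=64)
    low, high = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
    print(adapter(low, high).shape)  # torch.Size([2, 64, 16, 16])
```

The additive, channel-wise gating is what keeps the token count fed to the LLM unchanged: the high-resolution pathway only modulates the existing low-resolution tokens instead of adding new ones.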
1EJIax7ekV | Reinforcement Learning from Wild Animal Videos | [
"Elliot Chane-Sane",
"Constant Roux",
"Olivier Stasse",
"Nicolas Mansard"
] | We propose to learn legged robot locomotion skills by watching thousands of wild animal videos from the internet, such as those featured in nature documentaries. Indeed, such videos offer a rich and diverse collection of plausible motion examples, which could inform how robots should move. To achieve this, we introduce Reinforcement Learning from Wild Animal Videos (RLWAV), a method to ground these motions into physical robots. We first train a video classifier on a large-scale animal video dataset to recognize actions from RGB clips of animals in their natural habitats. We then train a multi-skill policy to control a robot in a physics simulator, using the classification score of a third-person camera capturing videos of the robot's movements as a reward for reinforcement learning. Finally, we directly transfer the learned policy to a real quadruped Solo. Remarkably, despite the extreme gap in both domain and embodiment between animals in the wild and robots, our approach enables the policy to learn diverse skills such as walking, jumping, and keeping still, without relying on reference trajectories nor hand-designed rewards. | [
"Legged Locomotion",
"Imitation Learning from Videos",
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=1EJIax7ekV | https://openreview.net/forum?id=1EJIax7ekV | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"s4X7YYeHbn",
"lWYEMWiRfB",
"l1Apu8Nkqd",
"g3cUgFjeA3",
"XW1NiLxfcW",
"VwWkWmdIRX",
"U9h3LcuT9v",
"RkPhQ0Xd2J",
"QsReglRrcO",
"L5vaDo09jO",
"Fc1ME88Ml3",
"DkqiYtD1vW",
"8JPbMZpVXK",
"75wWwK69uV",
"4y85LKnDIo",
"0T9te1DfXZ"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1732532151326,
1732727411000,
1732229784904,
1732532131468,
1732580745797,
1732229494718,
1732229588961,
1734793296424,
1732532179601,
1732818913676,
1732733880453,
1732613344795,
1737524127224,
1730726230646,
1730705829387,
1730852120907
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Reviewer_voL6"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Area_Chair_PqzQ"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11494/Reviewer_BwRh"
],
[
"ICLR.cc/2025/Conference/Submission11494/Reviewer_zazf"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11494/Reviewer_zazf"
],
[
"ICLR.cc/2025/Conference/Submission11494/Reviewer_voL6"
],
[
"ICLR.cc/2025/Conference/Submission11494/Reviewer_BwRh"
]
],
"structured_content_str": [
"{\"comment\": \"We hope that our new experiments and explanations addressed your concerns and are eager to hear your opinion before the end of the rebuttal.\"}",
"{\"comment\": \"We appreciate your careful consideration of cross-embodiment learning but firmly disagree with your assessment.\\n\\nCross-embodiment imitation involves \\\"agents learn policies from videos of other agents demonstrating the same task, but with stark differences in their embodiments\\\" [1]. Our results demonstrate an unprecedented case of this, as shown in [this video capsule](https://www.dropbox.com/scl/fi/3pa6xko9qfdt7nsxl1x29/Rebuttal_2.mp4?rlkey=04k1kox2vq0hgoehkzj30nvnw&st=wocetxku&dl=0), with emergent skills arising solely from the transfer of animal video behaviors by design.\\n\\nNonetheless, we thank the reviewer for the time and effort dedicated to reviewing our paper.\\n\\n---\\n\\n1. XIRL: Cross-embodiment Inverse Reinforcement Learning, 2021\"}",
"{\"comment\": \"Thank you for your helpful feedback.\\n\\nWe have prepared a short [Rebuttal_Video.mp4](https://www.dropbox.com/scl/fi/60glxqu1yjtlzusm8zv6r/Rebuttal_Video.mp4?rlkey=y1dx11pbbwvenst18a2sgm32k&st=72svyiw2&dl=0).\\nThis video aims to better illustrate our results, highlight the failure cases of our approach, and clarify the challenges of our problem setup. \\nWe kindly encourage you to watch it, as it may provide a fresh perspective on our work.\\n\\n---\\n\\nWe believe that the main contribution and significance of our paper may not have been fully recognized. Our work presents **the first successful demonstration of cross-embodiment visual imitation from a large, diverse and noisy internet dataset of animal video to physical robots**. The cross-embodiment capability arises from the diversity of animals in the dataset, making the video reward agnostic to the shape of the robot. This feature is uniquely enabled by using videos of animals in the wild, despite the underlying challenges discussed in the paper. These videos do not offer a proper physical grounding, which is then provided through reinforcement learning, leveraging a physics simulator and constraints that represent the physical limits of the robot. **Evidence of cross-embodiment is demonstrated in the experimental results**, such as the transfer of limb movements in the walking task. Our proof of concept underscores the vast potential of leveraging internet videos to advance locomotion capabilities.\\n\\nWe address the rest of your concerns below.\\n\\n---\\n\\n**Q. It seems the paper lacks comparison to some baseline or other works. For example, can we compare the results in sim w/ some hand crafted reward models? Then you can compare sample efficiency of the proposed method.**\\n\\n**A.** Regarding sample efficiency, we match the number of training epochs used in [1] while utilizing half the number of parallel environments in their setup for learning walking on flat ground with an Anymal robot. Manually designing rewards for each skill, such as in [1], would indeed outperform our approach which does not have direct access to ground truth rewards. Please note that tuning such ground truth reward functions for new skills is not trivial and can be time-consuming. However, the goal and contribution of our work are fundamentally different. We demonstrate that it is possible to transfer locomotion skills from a large, diverse, and noisy dataset of wild animal videos to physical robots. While our results are currently limited to simple skills, **we argue that our proof of concept highlights the potential of systematically leveraging video data for locomotion tasks, offering better scalability** compared to manual reward design. \\n\\n---\\n\\n**Q. Would like to know how large the animal dataset needs to be to make it work. This work uses 8.7K videos. Do we need more or it can work w/ less? Can we add an ablation on it?**\\n\\n**A.** Thank you for the suggestion. In Appendix A, Figure 6, we provide an ablation study on the impact of dataset size on the learned behaviors. The results show that our approach works with slightly less data (e.g., 50% of the full dataset), but reducing the dataset size too much (e.g., 25% or less) prevents some target skills from emerging.\\nThis underscores the importance of training the video-based reward on a sufficiently large and diverse dataset to ensure effective generalization to robot locomotion.\\n\\n---\\n\\n**Q. 
The paper claims that the reward model can help policy learn well in simulation and then successfully transfer to real. However, to me the \\\"transfer to real\\\" part seems orthogonal to the reward model itself. Could author explain why better reward model can lead to better sim-to-real transfer? For example, if we use a hand crafted reward function in the same setup and learn a policy in sim, can it also transfer to real? My impression is the answer should be yes.**\\n\\n**A.** We don\\u2019t make this claim in the paper. On the contrary, we argue that **transferring locomotion skills from animal videos to a real robot requires physical grounding**. This is addressed in Stage 2 of our approach, where we train a policy in a physics simulator using constrained RL. During this phase, the policy maximizes our video-based reward while adhering to behaviors that are physically plausible and transferable to a real robot. We actually believe that most rewards, including our complex video-based reward, could be physically grounded through our setup. We have updated the paper to better emphasize this physical grounding aspect. \\n\\n---\\n1. Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning, 2021\"}",
"{\"comment\": \"We hope that our new experiments and explanations addressed your concerns and are eager to hear your opinion before the end of the rebuttal.\"}",
"{\"comment\": \"Thanks, the reply addresses my concerns. I would like to raise the score to 6.\"}",
"{\"comment\": \"Thank you for your helpful feedback.\\n\\nWe have prepared a short [Rebuttal_Video.mp4](https://www.dropbox.com/scl/fi/60glxqu1yjtlzusm8zv6r/Rebuttal_Video.mp4?rlkey=y1dx11pbbwvenst18a2sgm32k&st=72svyiw2&dl=0).\\nThis video aims to better illustrate our results, highlight the failure cases of our approach, and clarify the challenges of our problem setup. \\nWe kindly encourage you to watch it, as it may provide a fresh perspective on our work.\\n\\n---\\n\\nWe believe that the main contribution and significance of our paper may not have been fully recognized. Our work presents **the first successful demonstration of cross-embodiment visual imitation from a large, diverse and noisy internet dataset of animal video to physical robots**. The cross-embodiment capability arises from the diversity of animals in the dataset, making the video reward agnostic to the shape of the robot. This feature is uniquely enabled by using videos of animals in the wild, despite the underlying challenges discussed in the paper. These videos do not offer a proper physical grounding, which is then provided through reinforcement learning, leveraging a physics simulator and constraints that represent the physical limits of the robot. **Evidence of cross-embodiment is demonstrated in the experimental results**, such as the transfer of limb movements in the walking task. Our proof of concept underscores the vast potential of leveraging internet videos to advance locomotion capabilities.\\n\\nWe address the rest of your concerns below.\\n\\n---\\n\\n**Q. Position of the paper is a bit misleading. It suggests that the reward function would come purely form videos. However, the approach uses a number of hand-designed reward terms such as air time, symmetry, and terminations.**\\n\\n**A.** In our paper, we argue that transferring locomotion skills from animal videos to a real robot requires physical grounding. This is addressed in Stage 2 of our approach, where a policy is trained in a physics simulator using constrained RL. During this phase, the policy maximizes the video-based reward while adhering to constraints that ensure physically plausible and transferable behaviors. These constraints are necessary for physical grounding, as many parameters\\u2014such as the torque limits of the Solo-12 robot\\u2014cannot be inferred from animal videos. During RL, the policy learns multiple skills simultaneously, but the constraints are applied uniformly across skills. Consequently, **the variations in the learned skills come purely from videos**. We have revised the paper to better highlight this critical aspect of physical grounding. Employing such constraints or similar penalties is a standard and essential practice in learning-based locomotion for real-world robots, regardless of the target locomotion skill, see [1, 2] for example. We anticipate that more advanced video models and more accurate physics simulators could eliminate the necessity for symmetry and air-time constraints.\\n\\n---\\n\\n**Q. The results are promising but overall limited. Looking at the supplementary materials video it looks like the learnt skills do not quite match the desired behaviors.**\\n\\n**A.** We acknowledge the limitations of our results. However, our proof of concept demonstrates the potential of leveraging large video datasets sourced from the internet for locomotion tasks. 
This problem is particularly challenging due to the inherent diversity and noisiness of internet videos.\\n- *Keeping Still*: While the robot's feet move slightly, it remains stationary without shifting its base. This aligns with many \\\"keeping still\\\" videos in the dataset, which often show animals exhibiting minimal in-place motion.\\n- *Running*: We agree the resulting motion resembles trotting rather than running. However, the robot demonstrates broader and faster movements compared to walking, reflecting an intermediate gait.\\n\\nWe refer to *Rebuttal_Video.mp4* for additional qualitative insights.\\n\\n---\\n\\n**Q. It would be good to ablate the impact of each of the reward terms.**\\n\\n**A.** We have added an ablation for the base orientation constraint around the roll axis and the foot air-time constraint in Appendix A Table 3. In our *Rebuttal_Video.mp4*, we show that without the air-time constraint, the robot performs motions visually close to walking but instead slips in place by generating high-frequency ground contacts, exploiting imperfections in the simulator. Regarding symmetry loss, our *Rebuttal_Video.mp4* illustrates failure cases when it is removed. Without symmetry, the robot may deviate from straight running, turning instead while maintaining trotting motions. Additionally, instead of walking, it may keep two feet on the ground while simulating running motions with only two legs, failing to move the body forward but still deceiving the video classifier \\n\\n---\\n\\n1. Not Only Rewards But Also Constraints: Applications on Legged Robot Locomotion, 2024\\n2. Extreme Parkour with Legged Robots, 2024\"}",
"{\"comment\": \"Thank you for your helpful feedback.\\n\\nWe have prepared a short [Rebuttal_Video.mp4](https://www.dropbox.com/scl/fi/60glxqu1yjtlzusm8zv6r/Rebuttal_Video.mp4?rlkey=y1dx11pbbwvenst18a2sgm32k&st=72svyiw2&dl=0).\\nThis video aims to better illustrate our results, highlight the failure cases of our approach, and clarify the challenges of our problem setup. \\nWe kindly encourage you to watch it, as it may provide a fresh perspective on our work.\\n\\n---\\n\\nWe believe that the main contribution and significance of our paper may not have been fully recognized. Our work presents **the first successful demonstration of cross-embodiment visual imitation from a large, diverse and noisy internet dataset of animal video to physical robots**. The cross-embodiment capability arises from the diversity of animals in the dataset, making the video reward agnostic to the shape of the robot. This feature is uniquely enabled by using videos of animals in the wild, despite the underlying challenges discussed in the paper. These videos do not offer a proper physical grounding, which is then provided through reinforcement learning, leveraging a physics simulator and constraints that represent the physical limits of the robot. **Evidence of cross-embodiment is demonstrated in the experimental results**, such as the transfer of limb movements in the walking task. Our proof of concept underscores the vast potential of leveraging internet videos to advance locomotion capabilities.\\n\\nWe address the rest of your concerns below.\\n\\n---\\n\\n**Q. Ablation study of the classifier training and cross-embodiment**\\n\\n**A.** The variations in the learned locomotion skills stem entirely from our video-based reward, which is trained exclusively on animal videos spanning hundreds of species, without any robot videos. There are no skill-specific rewards or reference trajectories, nor a transfer through a specific intermediate space (like configuration space, as used in other papers), making the emergent skills a direct result of cross-embodiment visual imitation. In Appendix A Figure 6, we present an ablation study on the impact of dataset size on the learned behaviors. The results show that significantly reducing the dataset size (e.g., to 25% or less) disrupts the proper emergence of locomotion skills. This underscores the importance of training on a sufficiently large and diverse dataset to ensure effective generalization to robot locomotion.\\n\\n---\\n\\n**Q. Classifier focusing primarily on the background and the animal's torso, neglecting the movement of the four legs / How would the method perform if the output criterion of the classifier is only the movement of the robot's center of mass?**\\n\\n**A.** We disagree that the classifier neglects leg movements: while jumping may primarily rely on torso movement, this does not apply to other skills. Indeed, the camera tracks the torso in xyz and around the yaw axis, meaning that distinguishing between keeping still and walking/running depends solely on leg movements. Moreover, *Rebuttal_Video.mp4* highlights a failure case where, when commanded to walk, the robot moves only its front right and hinge left legs with walking-like motions while keeping the other two legs idle, effectively deceiving the video classifier. This shows that recognizing leg movements is crucial for achieving walking and running. 
On the contrary, explicitly extracting specific criteria (such as the center of mass), would make it more difficult to use videos of animals from the wild and could hinder the transfer process. This is because the diverse and unstructured nature of the animal videos requires a more general approach to enable successful cross-embodiment transfer.\\n\\n---\\n\\n**Q. If we do not use the trained classifier as a rewarder, but instead manually assign simple rewards to encourage the quadruped robot to stay still, walk forward, or jump, how effectively can the robot learn these skills?**\\n\\n**A.** The tasks we considered in the paper are classical in legged locomotion, yet we are displaying a similar sample efficiency as other methods, such as [1] in their setup for learning walking on flat ground with an Anymal robot. Manually designing rewards for each skill, such as in [1], would indeed outperform our approach which does not have direct access to ground truth rewards. Please note that tuning such ground truth reward functions for new skills is not trivial and can be time-consuming. However, the goal and contribution of our work are fundamentally different. We demonstrate that it is possible to transfer locomotion skills from a large, diverse, and noisy dataset of wild animal videos to physical robots. While our results are currently limited to simple skills, **we argue that our proof of concept highlights the potential of systematically leveraging video data for locomotion tasks, offering better scalability** compared to manual reward design. \\n\\n---\\n\\n1. Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning, 2021\"}",
"{\"metareview\": \"The paper introduces Reinforcement Learning from Wild Animal Videos (RLWAV), which uses a video classifier trained on large-scale wild animal footage to reward a quadruped robot\\u2019s locomotion policy. By combining classifier scores with a few hand-designed reward terms, the policy learns walking, jumping, and standing behaviors in simulation and then transfers them to a real Solo quadruped. The key claim is that reference trajectories are unnecessary because animal videos provide diverse, natural examples of movement.\", \"strengths\": \"-- Clear Presentation: Methodology and experiments are explained coherently.\\n\\n-- Real-World Transfer: Demonstrates feasibility on an actual quadruped robot.\", \"weakness\": \"-- Incomplete Positioning: Still relies on additional hand-crafted rewards (e.g., symmetry), which is underemphasized.\\n\\n-- Limited Ablations: Lacks thorough studies on dataset size, animal variety, and reward component importance.\\n\\n-- Lack of Comparisons: No strong baselines or simpler reward models for fair benchmarking.\\n\\nAfter carefully reading the paper, the reviews and rebuttal discussions, the AC find while the concept is innovative, the results are somewhat limited. The AC agrees with the majority of reviewers on rejecting the paper.\", \"additional_comments_on_reviewer_discussion\": \"See the weakness and comments above, while some of the reviews' concerns are addressed, there are still remaining concerns.\"}",
"{\"comment\": \"We hope that our new experiments and explanations addressed your concerns and are eager to hear your opinion before the end of the rebuttal.\"}",
"{\"comment\": \"Thank you for your thoughtful response.\\n\\nWe acknowledge that symmetry and air time are expedients. However, these do not detract from the generality of our results, particularly within the scope of existing research in this area. Below, we provide essential context that could shift your perspective on the significance of our results, justifying better the positioning of our paper. We also tried to better adjust our wording to match your recommendation.\\n\\n---\\n\\n**1. Imitating wild animal videos is harder than you seem to realize.**\\n\\nFor clarity, we included only some of the most identifiable animal videos in the rebuttal video, which may underplay the difficulty of our setting. However, the videos are generally noisier and the actions unclear. To illustrate this, we compiled some other videos used to train our reward model [in this video](https://www.dropbox.com/scl/fi/h31t05ekwhyiul9b008nf/AK_compilation.mp4?rlkey=uhj1mgpittjqv4fdh8ixzb2w1&st=oq0e3sey&dl=0). Viewing these should make the challenges evident and underscore the impressiveness of our results. \\n\\nCompared to other works on visual imitation from large video datasets, the most relevant (large cross-embodiment and deployment on a real robot) is perhaps [1], although it focuses on manipulation. Even though only a subjective comparison is possible, we believe our cross-embodiment setting is more pronounced, our dataset noisier, and our demonstrated skills more complex.\\n\\nWhile we do not claim to solve visual imitation\\u2014a challenge that will require many more breakthroughs\\u2014we firmly believe our paper represents a significant step forward, showcasing an unprecedented demonstration of extreme cross-embodiment transfer.\\n\\n---\\n\\n**2. Skill-agnostic constraints are an integral aspect of RL locomotion, as they are unavoidable.**\\n\\nThe constraints we used are ubiquitous in RL locomotion. [2, 3] for instance learn policies that can walk, crawl, leap and climb, all using a very similar set of core/skill-agnostic constraints as ours, yet they use additional skill-specific rewards and/or hand-designed terrains in simulation to ensure the emergence of the target skills.\\n\\nIndeed, these skill-agnostic constraints are obligatory: except for the three discussed below, the constraints we employ address robot-specific limitations, such as torque, or velocity limits, making them mandatory yet not inferable from animal data.\\n\\n- *Air time*: While not ideal, this constraint is widely used (ex: [2\\u20136]). One reason for this is the limitations in physics simulators, which assume rigid bodies. This leads to unrealistic behaviors, as explained in our previous reply. Addressing this via video-based rewards would require much finer motion understanding, which is a significant challenge (see above).\\n- *Symmetry regularization*: Also used in [7] for example, this constraint aligns with the natural symmetry of quadrupeds and their locomotion patterns. Our reward model observes the robot from one side, necessitating symmetry to ensure robustness (as shown in the Rebuttal Video).\\n- *Base orientation*: has negligible impact (see ablation in the paper)\\n\\n---\\n\\n**3. Modification in the paper to soften the claims**\\n\\nWe will revise the last part of the abstract to clarify that the skills emerge \\u201c*without relying on reference trajectories nor* ***skill-specific*** *rewards.*\\u201d, consistent with our modifications to the introduction. 
Following your remark, we carefully revised the introduction and found no over-claims. We will change the \\u201cconstraints\\u201d paragraph of Section 3.3 to be more explicit about the importance of auxiliary constraints, and we will modify the conclusion to make this issue explicit in the limitations.\\n\\nWe hope that, with these arguments in mind, you will now agree with the wording of our claims. \\n\\n---\\n\\n1. Learning Generalizable Robotic Reward Functions from \\\"In-The-Wild\\\" Human Videos, 2021\\n2. Extreme Parkour with Legged Robots, 2024\\n3. SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience, 2024\\n4. Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning, 2021\\n5. Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations, 2022\\n6. Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion, 2023\\n7. Not Only Rewards But Also Constraints: Applications on Legged Robot Locomotion, 2024\"}",
"{\"comment\": \"Thank you for the response, rebuttal video, and the additional ablations. I appreciate the acknowledgement of the results (Q2) and the videos of policies trained with and without the symmetry and air time rewards (Q3). I increased my score.\\n\\nMy main remaining concern is related to Q1. I think that this is a promising and interesting direction. However, in its current form, I feel that there is a mismatch between the paper claims and results. My suggestion is to keep working on improving the method to get to a level of performance that better substantiates the claims (namely, not reliant on hand-coded auxiliary rewards like air time and symmetry & results that better reflect the desired skills like keeping still or running) or to soften the claims to better reflect the current level of performance (video classifiers serving as an auxiliary reward & preliminary transfer results).\"}",
"{\"comment\": \"Thank the authors for their response. The rebuttal makes the paper more clear. However, I still found it insufficiently evidenced that the method can truly learn cross-embodiment skills. Therefore, I have decided to maintain my score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper introduces Reinforcement Learning from Wild Animal Videos (RLWAV), a novel method for training quadruped robots to perform locomotion skills by observing wild animal videos. The authors train a video classifier on a large dataset of labeled animal videos and use the classification scores to provide rewards for RL training. RLWAV avoids the need for reference trajectories or hand-designed rewards by transferring learned skills from animals to robots. The method is validated both in simulation and on a real quadruped robot (Solo-12), demonstrating the transfer of skills such as walking and jumping.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. Learning quadruped robot locomotion skills from existing wild animal locomotion is a good inspiration.\\n2. The task setup and experimental details are described clearly in the paper.\", \"weaknesses\": \"1. The current ablation study of the classifier training set is inadequate, making it hard to determine whether the method effectively utilizes cross-embodiment skills acquired from a diverse range of wild animal videos. The ablation should encompass factors such as the size of the training set and the number of different types of animals included in it.\\n2. While we anticipate gaining insights into four-legged movement skills from wild animal datasets, the only information we can provide the robot is the output of a classifier. This classifier appears to be able to achieve its task by focusing primarily on the background and the animal's torso, neglecting the movement of the four legs.\", \"questions\": \"1. If we do not use the trained classifier as a rewarder, but instead manually assign simple rewards to encourage the quadruped robot to stay still, walk forward, or jump, how effectively can the robot learn these skills?\\n2. How would the method perform if the output criterion of the classifier is only the movement of the robot's center of mass?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces an interesting idea to learn reward function for locomotion through wild animal video. The authors curate a dataset of 8.7 wild animal videos, train a video classifier and then use it as a reward model to train RL policy to control robot for locomotion. The multi-skill policy can be trained in a physical simulator and transfer to real world.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea is novel.\\n2. The paper is well written. Easy to follow.\\n3. Experiments and ablation among its own algorithm shows effectiveness of the proposed method.\", \"weaknesses\": \"1. It seems the paper lacks comparison to some baseline or other works. For example, can we compare the results in sim w/ some hand crafted reward models? Then you can compare sample efficiency of the proposed method.\\n2. Would like to know how large the animal dataset needs to be to make it work. This work uses 8.7K videos. Do we need more or it can work w/ less? Can we add an ablation on it?\", \"questions\": \"The paper claims that the reward model can help policy learn well in simulation and then successfully transfer to real. However, to me the \\\"transfer to real\\\" part seems orthogonal to the reward model itself. Could author explain why better reward model can lead to better sim-to-real transfer? For example, if we use a hand crafted reward function in the same setup and learn a policy in sim, can it also transfer to real? My impression is the answer should be yes.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper trains a supervised video classification model on a dataset of wild animal videos (walking, running, standing, and jumping). It then uses the video model classifications as rewards to train a policy to control a quadroped robot in simulation. The policy is then transferred onto a quadroped robot in the real world.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper studies an interesting problem of learning reward models from videos\", \"The proposed approach is interesting and in a good direction\", \"The paper is well written and the presentation is clear\"], \"weaknesses\": [\"Position of the paper (title, abstract, intro) is a bit misleading. It suggests that the reward function would come purely form videos. However, the approach uses a number of hand-designed reward terms such as air time, symmetry, and terminations. I think that this is ok but the positioning of the paper should be updated to reflect that. In the current version of the approach, the video model serves only as part of the overall reward function.\", \"The results are promising but overall limited. Looking at the supplementary materials video it looks like the learnt skills do not quite match the desired behaviors, \\\"keeping still\\\" seems to be moving and \\\"running\\\" does not seem to be running.\", \"It would be good to ablate the impact of each of the reward terms. The current version of the manuscript includes the symmetry loss ablation which shows that the symmetry term plays a considerable role.\"], \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
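The RLWAV record above turns a video classifier's score on third-person clips of the robot into the RL reward. A minimal sketch of that reward computation follows, assuming a frozen clip classifier and using the softmax probability of the commanded skill (walk, run, jump, keep still) as the per-window reward; the `DummyClipClassifier`, clip length, and frame size are placeholders rather than the paper's actual backbone, and the physical-grounding constraints discussed in the rebuttals are not shown.

```python
# Minimal sketch of a video-classifier reward; not the paper's implementation.
# Assumptions: a frozen clip classifier, a rolling window of rendered frames,
# and the softmax score of the commanded skill used directly as the reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DummyClipClassifier(nn.Module):
    """Stand-in for whatever video action-recognition backbone is actually used."""
    def __init__(self, num_skills: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(8, num_skills)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, 3, T, H, W) RGB frames from a third-person camera
        return self.head(self.features(clip).flatten(1))  # (B, num_skills) logits

@torch.no_grad()
def video_classifier_reward(classifier: nn.Module, frames: list, skill_idx: int) -> float:
    """frames: list of (3, H, W) tensors in [0, 1]; returns the probability
    the classifier assigns to the commanded skill over the current clip."""
    clip = torch.stack(frames, dim=1).unsqueeze(0)  # (1, 3, T, H, W)
    probs = F.softmax(classifier(clip), dim=-1)
    return probs[0, skill_idx].item()

if __name__ == "__main__":
    clf = DummyClipClassifier(num_skills=4).eval()
    frames = [torch.rand(3, 64, 64) for _ in range(8)]  # stand-in rendered frames
    print(video_classifier_reward(clf, frames, skill_idx=1))
```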
1EEst6oDU7 | Informing Reinforcement Learning Agents by Grounding Language to Markov Decision Processes | [
"Benjamin Adin Spiegel",
"Ziyi Yang",
"William Jurayj",
"Ben Bachmann",
"Stefanie Tellex",
"George Konidaris"
] | While significant efforts have been made to leverage natural language to accelerate reinforcement learning, utilizing diverse forms of language efficiently remains unsolved. Existing methods focus on mapping natural language to individual elements of MDPs such as reward functions or policies, but such approaches limit the scope of language they consider to make such mappings possible. We present an approach for leveraging general language advice by translating sentences to a grounded formal language for expressing information about *every* element of an MDP and its solution including policies, plans, reward functions, and transition functions. We also introduce a new model-based reinforcement learning algorithm, RLang-Dyna-Q, capable of leveraging all such advice, and demonstrate in two sets of experiments that grounding language to every element of an MDP leads to significant performance gains. | [
"Language Grounding",
"RLang",
"RL",
"Formal Language",
"LLM"
] | Reject | https://openreview.net/pdf?id=1EEst6oDU7 | https://openreview.net/forum?id=1EEst6oDU7 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"y4obGTDlkz",
"tHc9tgHBpw",
"qL8eRgWiQB",
"nmdGQGeY2i",
"lUNcARhxny",
"knjD7rnW1Q",
"jWI2WOYxYf",
"ih47bYCsvF",
"ifkuLxRN14",
"ekWcFj0jhA",
"c8QbhuTO5z",
"ZLcXfCABEI",
"XHTdONm6m8",
"UiUlxosdd9",
"UH0I5ANDsf",
"KxWGCKN4kT",
"7rVE7jXGzw",
"7R35BUsdvd",
"72foO3YhiW",
"66lKJ2s6c5",
"0zkWLH7TOA"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment"
],
"note_created": [
1732934880254,
1731471123185,
1732899498815,
1731617934724,
1732517126464,
1732808303104,
1729892096636,
1732226472842,
1730131125307,
1731470980844,
1731256932176,
1734750828100,
1733198480574,
1729993335145,
1731471311536,
1732226773468,
1732226838103,
1731471411801,
1737523462010,
1731471339920,
1732226821645
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_3iuj"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_8geU"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_q6RU"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_8geU"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_EByn"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_q6RU"
],
[
"ICLR.cc/2025/Conference/Submission1640/Area_Chair_c7Qb"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Reviewer_3iuj"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1640/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for the feedback. Can you elaborate more on the experimental setups that you believe readers would want to see to be convinced, given that we are grounding language to a discrete tabular setting? We have evaluated our method on over half a dozen environments. Furthermore we are able to ground information to every element of the MDP---to our knowledge no other work has done this---and introduced a new tabular RL agent for language grounding based on DynaQ. The symbol-grounding demonstration in sec 4.3 shows that human annotated grounding files (which must only be written once for an entire class of environments) can be automated in part by VLMs. VLMs do not need to be directly integrated into the pipeline, and we don't a priori expect VLMs to do all the symbol-grounding work for us. Quantitative results for such a demonstration are somewhat tangential to the thrust of our work, as it would essentially be a separate evaluation of the capabilities of VLMs. We do report quantitative results for the LLM groundings with a user study, however, which has been moved to the appendix.\\n\\nUnfortunately, we are unable to make edits to the paper this late in the review process. We believe there is a language+RL community at ICLR that would be interested in this approach to grounding language, and who would find our methodology and demonstration using numerous experiments convincing. We humbly ask you to consider this when making your final decision.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their valuable feedback and thoughtful comments. We address their comments below.\\n\\n**Weaknesses**\", \"regarding_weakness_1\": \"Your point that the dense, verbose, and low-level expert advice is potentially at a similar level of abstraction to the pre-defined symbols is well-taken. However, we don\\u2019t suggest that this should be surprising, and argue that more granular advice is objectively easier to operationalize than more abstract advice. Such advice relies less on the common-sense capabilities of LLMs and more on their capacity to perform approximate machine translation.\", \"regarding_weakness_2\": \"We agree with your suggestion to elucidate how various kinds of advice impact performance, and have included the following paragraph at the end of section 4.2 to address these questions. Unfortunately this will displace the table of results for the user study to the appendix, but we believe this analysis is more central to the thrust of the paper.\\n\\n> The impact of each kind of advice (e.g. plans, policies, transitions, and rewards) varied across tasks in the VirtualHome and Minigrid experiments, with some environments benefiting primarily from plan-centric advice and others benefiting most from policy advice. In virtually all cases, model-centric advice---about transitions and rewards---was less valuable than other forms of advice. We suggest that this discrepancy is due to how useful model-based advice is in comparison to explicit policy and planning advice. While policy and planning advice describe which actions to take in a given context, model-based advice was often used to suggest which actions \\\\textit{not} to take, relying on the underlying learning agent to find the best action. Furthermore, model-based advice was useful less of the time, i.e. in fewer states. This is best illustrated by comparing the relative performance of effect-enabled RLang-Dyna-Q agents with policy and plan-enabled agents in the MidMazeLava Experiment in Figure \\\\ref{fig:midmazelava} and the FoodSafety Experiment in Figure \\\\ref{fig:foodsafety}. The model-based advice in the first experiment is to avoid lava, which there are many opportunities to walk into, resulting in the performance of the effect-enabled agent closer to the plan and policy-enabled agents. By comparison, the model-based advice in the second experiment is more niche, accounting only for a handful of transitions, and the effect-enabled agent correspondingly performs closer to baseline Dyna-Q than to the plan and policy-enabled agents.\\n>\", \"regarding_weaknesses_3_and_4\": \"We have increased the fonts in those figures from size 14 to 17 for smaller text and from 20 to 24 for titles and axes labels. We hope they are more legible now! We have also made the text casing match in Figure 7.\\n\\n**Questions**\", \"regarding_question_1\": \"Expert advice was given by two students who were familiar with the environments, including the action (i.e. a go-to skill) and perception space of the agent (e.g. that the agent sees the world in terms of objects and primitive predicates).\\n\\nWe added an aside to a sentence in the third paragraph in section 4.1 explaining a bit about human experts:\\n\\n> For each environment, we collected multiple pieces of natural language advice from human experts---people familiar with both the environment and how the agent interacts with it via perception and action, i.e. 
the skills the agent has access to and the fact that its perception consists of objects and simple predicates\\n>\", \"regarding_question_2\": \"We do not have ablations for HardMaze. In contrast with the other MiniGrid environments, DynaQ was not able to achieve any reward in this environment due to its long-horizon nature. Our goal with this experiment was to show that language advice could make the difference between some reward and none at all \\u2014 as you may notice, the returns were relatively modest compared to other environments, but significant.\", \"regarding_question_3\": \"The combined agent is worse than the plan-informed agent due to the effect advice, which decreases performance due to non-determinism in the VirtualHome simulator which is an unintended bug and not a feature of the environment. We note this in the description for Figure 6.\\n\\nWe again thank the reviewer and welcome any additional questions, comments, and concerns they may have.\"}",
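For readers skimming this record, the tabular Dyna-Q backbone that RLang-Dyna-Q builds on is sketched below; the "advice" here is just a hand-coded transition hint seeded into the planning model of a toy chain world, standing in for (and far simpler than) the RLang groundings of policies, plans, effects, and rewards that the authors describe.

```python
# Illustrative tabular Dyna-Q with an advice-seeded model; not RLang-Dyna-Q itself.
import random
from collections import defaultdict

class DynaQ:
    def __init__(self, actions, alpha=0.5, gamma=0.95, eps=0.1, planning_steps=10):
        self.q = defaultdict(float)   # (state, action) -> value
        self.model = {}               # (state, action) -> (reward, next_state)
        self.actions = list(actions)
        self.alpha, self.gamma, self.eps, self.planning_steps = alpha, gamma, eps, planning_steps

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(self.actions)
        shuffled = random.sample(self.actions, len(self.actions))  # random tie-break
        return max(shuffled, key=lambda a: self.q[(s, a)])

    def _update(self, s, a, r, s2):
        target = r + self.gamma * max(self.q[(s2, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def step(self, s, a, r, s2):
        self._update(s, a, r, s2)             # direct RL update
        self.model[(s, a)] = (r, s2)          # model learning
        for _ in range(self.planning_steps):  # planning over the (possibly advised) model
            (ps, pa), (pr, ps2) = random.choice(list(self.model.items()))
            self._update(ps, pa, pr, ps2)

# Toy chain world: states 0..4, actions -1/+1, reward 1 on reaching state 4.
def env_step(s, a):
    s2 = max(0, min(4, s + a))
    return (1.0 if s2 == 4 else 0.0), s2, s2 == 4

agent = DynaQ(actions=[-1, +1])
# "Advice" seeded into the model before any experience, e.g. a grounded effect
# like "stepping right from the second-to-last cell reaches the goal".
agent.model[(3, +1)] = (1.0, 4)

for _ in range(50):
    s, done = 0, False
    while not done:
        a = agent.act(s)
        r, s2, done = env_step(s, a)
        agent.step(s, a, r, s2)
        s = s2
print({s: round(agent.q[(s, +1)], 3) for s in range(5)})
```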
"{\"comment\": \"I wanted to kindly follow up on my earlier response to your review. We\\u2019ve worked hard to address your comments and would greatly appreciate any further feedback you may have.\"}",
"{\"comment\": \"I appreciate the authors' timely response. They have adequately addressed my comments about weaknesses 3-4 and my questions.\\n\\nWeakness 1 (or rather a limitation): In my opinion, assuming access to detailed, almost program-like natural language advice sounds not so different from directly defining the RLang files (perhaps with Copilot in a comment-tab style). This is not a detrimental weakness but still a limitation and could be a bit disappointing for folks looking for ways to handle more natural advice like the non-expert advice in this work.\", \"weakness_2\": \"I appreciate the analysis added by the authors. Overall, I am not sure if the experimental evidence (at least in the environments presented) is strong enough to support the argument that grounding every aspect in an MDP is necessary. The demonstrated *feasibility* of grounding every part is very impressive though.\\n\\nFor these reasons, I will raise the contribution score from 2 to 3, and maintaining the overall score.\"}",
"{\"title\": \"Thanks for the response and i will keep the original score\", \"comment\": \"Agree with weakness 1. In-context learning is more cost efficient and more accessible than finetuning.\\nFor weakness 2, looking forward to the new technique that can resolve the inexpressiveness because this is an essential part for the practical adoption.\"}",
"{\"comment\": \"Thank you for the detailed rebuttal and clarifications. I appreciate the authors addressing my concerns thoughtfully.\\n\\nI still believe the experimental scope is not comprehensive enough to fully validate the framework's effectiveness across diverse RL paradigms and environments. While I understand the authors\\u2019 intent to focus on grounding language in a discrete tabular setting, demonstrating the generality of the proposed approach would require more diverse experimental setups. \\n\\nMoreover, the qualitative results in section 4.3 indeed can show an example of it, but it doesn't present metrics like its grounding accuracy, how integrating the system into RL can influence the final performance, etc. That's why the quantitative results are necessary.\\n \\nFor these reasons, I will maintain my original score,\"}",
"{\"summary\": \"The paper proposed RLang-Dyna-Q, which can ground any language advice to all components in MDP, compared to grounding only to individual components before. The solution uses in-context learning to first select the grounding type, then translate the advice to RLang program. The enhancement outperforms prior approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The algorithm automates language-to-MDP component translation, and streamlines the process of learning human decision-making for robots\\n2. The authors conducted extensive experiments and described the algorithm clearly\", \"weaknesses\": \"1. In-context learning limits the capability enhancement of the language model. It might be better if we could make the LM trainable and train the language model and the RL system end-to-end\\n2. Human language might not be expressive enough to be translated to RLang. In the experiment section, it stated that some advice cannot be converted to RLang. Could we have a more natural intermediate representation for the advice and agent?\", \"questions\": \"1. How to go beyond in-context learning?\\n2. How to handle the inexpressiveness of human language for RLang?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank the reviewer for their consideration -- we generally agree with your points, and we believe the greater community would be interested in the *feasibility* of this approach.\"}",
"{\"summary\": \"This pager introduces RLang-Dyna-Q, an extension of prior work, RLang, that transforms natural language into a formally specified language for solving tasks. Rather than focusing on policy-centric translations, as in much prior work, the authors observe that much of the advice or coaching offered by human experts will come in the form of statements that describe a transition function (e.g., \\\"If you miss, you will fall down\\\"), a reward function (e.g., \\\"You get 10 points when you touch the key\\\"), or a plan (e.g. \\\"Go to X, then pick up Y, then return to Z\\\"). RLang-Dyna-Q is a combination of Dyna-Q and RLang that uses the learned world model/transition functions of RLang to further refine the Q function learned in Dyna-Q.\\n\\nThe proposed RLang-Dyna-Q algorithm is compared to Dyna-Q and to a random policy in a handful of tabular RL domains, showing that it significantly outperforms Dyna-Q. The authors also perform an ablation study in which they test only subsets of RLang-Dyna-Q (only policy advice, only transition advice, or only plan advice). Finally, the authors conduct a small user study in which 10 undergraduate students provide advice to warm start an RLang-Dyna-Q, with each student contributing one piece of advice, and 5/10 pieces of advice leading to policy improvements over the baseline, un-advised policy.\\n\\n\\nAfter reviewing the rebuttal, the authors have clarified the scope of the paper and their intended contributions and research area. Given the research direction and goals are more in line with language grounding and leveraging language for tasks, rather than improving task-learning performance or RL/IL efficiency, I will raise my contribution score to 2 and my overall score to 5.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and provides a clear overview of the motivation, problem setting, and proposed solution.\", \"The paper proposes a blend of conventional planning literature and formal specification with the advancement of LLMs, leading to a significant improvement over conventional tabular RL solutions.\", \"The authors conduct a small scale user study, which solicits and leverages advice from untrained human coaches for a planning task.\"], \"weaknesses\": [\"The method is not entirely clear, particularly given the heavy reliance on prior work (RLang) in this paper. It is not clear how the Q table relates to the RLang vocabulary files or RLang declarations, and this information must be obtained by referring to and reading prior work (meaning that RLang-Dyna-Q does not entirely stand on its own, but feels more like the \\\"second half\\\" of the RLang work).\", \"The results for RLang-Dyna-Q are not very convincing, and the comparison to a method that is nearly three decades old is also not very convincing. Comparisons to more modern RL baselines would improve the work. In particular, comparing to an LLM that generates Python/programming solutions seems like a very important baseline (even if there is no RL refinement, it would be useful to simply see to what extent an advanced LLM can solve these tabular domains out-of-the-box).\", \"The advice required to make RLang-Dyna-Q actually improve over baselines seems very particular. For example, looking at the advice in Figures 3-6, there is a combination of plans, general advice, and state transition advice. There is not a discussion or written analysis on what types of advice work best, or why. 
Similarly, the success of different types of advice seems extremely finicky. Comparing advice from participants 5 and 10 in the user study, the written advice is nearly identical. However, the performance deltas are quite significant (from a 33% increase to just a 2% increase).\"], \"questions\": [\"Why not compare to conventional RL methods (e.g., PPO with a small neural network), to RLang itself, or to LLMs that generate code for plans?\", \"Why cut training off at 30-75 episodes, which is quite a small budget given that these are not expensive or safety-critical domains? It seems that one argument for RLang-Dyna-Q is that it could be is significantly more efficient than modern RL baselines by leveraging human advice, but if so then this should be shown by empirical comparisons (e.g., how many episodes does each method require to achieve maximum returns?).\", \"What differentiates good vs. bad advice for RLang-Dyna-Q? The user study provides great insight into the effects of different natural language prompts for the method. However, at times the prompts appear semantically identical, but they yield different results.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your insightful observations and questions. We've responded to them below.\", \"regarding_weakness_1_and_question_1\": \"This is a good observation, a more integrated system might train a language model and an RL agent end-to-end. For the goals of this paper, however, in-context translation sufficed to show that language could be used to inform RL agents. We acknowledge this interesting research direction and would be excited to see it addressed in future work.\", \"regarding_weakness_2_and_question_2\": \"The reviewer raises an important question about the expressivity of natural language and RLang, and how variations in expressivity between the two languages can make effective translation difficult. These concepts have been explored somewhat in the machine translation and semantic parsing communities. A potential solution\\u2014as the reviewer suggests\\u2014might be to introduce another intermediate representation for natural language such as LambdaDCS, a general logical form, and compile this language into RLang, an MDP-theoretic specification language. We note, however, that RLang enables the introduction of novel syntax via Vocabulary Files, which we have leveraged in these experiments to increase the expressivity of RLang itself, bypassing the need for another intermediate language. In future work, we hope to automate this process of semantic expansion so that more language may be grounded using this methodology.\\n\\nWe welcome any additional questions or critiques.\"}",
"{\"summary\": \"The paper proposes a framework to leverage natural language-based advice to accelerate RL learning process. The RLang-Dyna-Q algorithm extends the original RLang framework and combines it with the traditional Dyna-Q. Empirical results over two sets of experiments help verify the effectiveness of propose algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is overall good and easy to follow.\\n2. The idea of translating natural language advice to RLang and using RLang to generate synthetic transitions makes sense.\\n3. The writing flow in the experiment is great \\u2013 sec 4.1 and 4.2 present two effective cases with assumptions on semantically-meaningful labels, while sec 4.3 also presents efforts to try to address this assumption. Also, user study has been completed in Table 2.\", \"weaknesses\": \"1. Only Q-learning-based RL is tested in the experiment. More advanced and modern RL algorithms are needed to show the generality, e.g. PPO.\\n2. More LLM + RL baselines are needed. There are a few simple alternatives to directly leverage LLM to process natural language advice to help RL training. For example, what if we don\\u2019t use any RLang-based program, and only treat LLM\\u2019s as the generator for actions and transitions?\\n3. Another important assumption (and limitation) in the paper is that each environment will be provided with human-annotated natural language advice. This is a strong prior compared with all RL baselines. The author needs to discuss more about this assumption and whether we can use any other ways to bypass the need for human labels. For example, could LLMs directly generate advice for any given environment?\\n4. More qualitative results are needed for section 4.3 (a demo is not enough)\", \"questions\": \"1. Any idea why the DynaQ baseline doesn\\u2019t work in Figure 6\\u2019s experiment?\\n2. Typo in line 164.\\n3. If we are going to extend the algorithm to high-dimensional continuous RL problem, what could be the biggest challenges?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": [\"The paper introduces RLang-Dyna-Q, an extension of the RLang framework, which grounds natural language advice in a formal language (RLang) and integrates it into the Dyna-Q algorithm to improve reinforcement learning (RL). The method leverages human-provided language to enhance the learning of RL agents, particularly in transition and reward functions. Empirical results show improvements in RL tasks using natural language advice from human experts.\", \"Reasons to accept\", \"The idea of grounding natural language advice to improve RL performance is novel and well-motivated.\", \"The paper presents multiple experiments across domains like Minigrid/BabyAI and VirtualHome, demonstrating the benefits of the proposed approach.\", \"The inclusion of a user study, where human participants provide advice, offers valuable qualitative insights into how different types of language inputs influence performance.\", \"The method works towards reducing the reliance on dense human annotations, which shows potential for real-world application.\", \"Reasons to reject\", \"The experiments primarily focus on Q-learning, which is considered somewhat outdated. Comparisons to more advanced and widely used algorithms, such as SAC and PPO, are missing.\", \"Performance improvements are tied to specific types of advice, and it is not clear which types of advice are most effective or how to generalize them across tasks.\", \"A major limitation is the assumption that high-quality advice must be provided by human experts.\", \"The reliance on in-context learning for language grounding may restrict the model's ability to adapt and refine its understanding. Making the language model trainable alongside the RL agent could yield better results.\", \"The method's connection to prior work (RLang) is not fully clear, and the paper requires readers to reference previous work for a complete understanding. The algorithm is presented as an extension rather than a standalone solution.\", \"While this paper studies a promising research direction and presents an interesting approach, I believe its weaknesses outweigh its strengths. Consequently, I recommend rejecting the paper.\"], \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, three reviewers acknowledged the author's rebuttal.\"}",
"{\"title\": \"Summary of Review and Rebuttal Process for ACs and SACs\", \"comment\": \"Some in the community have suggested that a concise summary of reviews and rebuttals would assist reviewers, ACs, and SACs during the next part of the review process. We have tried our best to summarize the main strengths and weaknesses mentioned in our reviews, as well as our responses to them.\\n\\nIn terms of readability, all of the reviewers remarked that the paper was well-written, easy to follow, well-contextualized, and well-motivated, with one praising the writing flow in the experiments section in particular (reviewer q6RU, score: 5) and another stating, \\u201cThis is an important research topic and the contribution is contextualized nicely\\u201d (reviewer 3iuj, score: 6). Some of the reviewers were impressed with the quantity, variety, and quality of experiments performed (reviewer 3iuj, score: 6), with one reviewer highlighting our \\u201cextensive experiments\\u201d (reviewer 8geU, score: 6), which included experiments done across two sets of environments (6 total environments with our agent, multiple ablations, and a baseline), a user study, a two-tiered symbol-grounding demo using a VLM, and an additional experiment comparing directly to an additional baseline (SayCan). Others were unimpressed by the experiments, with one citing DynaQ as a poor point of comparison that is \\u201cnearly three decades old\\u201d and suggesting a comparison to a deep-learning based agent and an LLM without any RL (reviewer EByn, score: 3). Our work, however, is precisely on how to integrate language into RL, and our language-informed RL agent called RLang-DynaQ is based on DynaQ, which, due to its simplicity, is a prime candidate for a clear and minimal example showing how language can impact performance in classical model-based RL. Another stated that our VLM demonstration\\u2014an addendum demo designed to show that a VLM can replace the need for human-generated labels by using a VLM to label objects directly\\u2014needed more quantitative results, \\u201ca demo is not enough\\u201d (reviewer q6RU, score: 5). We believe a demo is sufficient to show this relatively straightforward image-labeling capability.\\n\\nOverall we found the reviews to be very constructive, and made a substantial number of updates to the paper based on helpful feedback. This included an additional full paragraph elucidating the methodology (suggested by reviewer EByn), another full paragraph dissecting the impact of various kinds of advice on agent performance (suggested by reviewers EByn and 3iuj), a sentence elaborating on how advice was collected in our user study (suggested by reviewer 3iuj), and we increased the font size on all reward plots (suggested by reviewer 3iuj). All reviewers responded positively to our updates, with reviewer 3iuj increasing their contribution from fair (2) to good (3). While most reviewers responded to our rebuttals, the author of our paper\\u2019s most critical review (EByn, score: 3), whose review we responded to in detail on November 12th (the day that reviews were released), has not yet responded to our rebuttal at the time of this comment. We note that the average score reported for all reviewers who engaged with us in the rebuttal process is (6+6+5)/3 = 5.666.\"}",
"{\"summary\": \"This paper studies the problem of leveraging natural language advice for training effective/efficient RL agents. The authors hypothesize that some natural language advice is more suitable to be grounded in reward function while others are better captured in transition functions. They further suggest that leveraging natural language advice by grounding them in a formal language (RLang) that is capable of describing all aspects of the system, is better than advising a subset of the system.\\n\\nThe authors adapt Dyna-Q to work with grounded advice in RLang. They evaluate this method in Minigrid and VirtualHome with mostly positive results to support their hypothesis.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. I appreciate the range of experiments included, as well as a comparison with SayCan in the appendix.\\n2. I also enjoy reading the section on user studies, and section 4.3 on automated semantic labeling for disambiguation advice. In general, I agree that gradually removing the need for expert annotations is important, let it be dense natural language advice, crafted vocabulary, or RLang grounding files in this case.\\n3. This is an important research topic and the contribution is contextualized nicely.\\n4. Most of the paper is quite clear. A few improvements can be made - see weakness 3.\", \"weaknesses\": \"1. Expert advice seems much more dense, verbose, and low-level (almost program-like) than non-expert advice. It is not completely surprising to me that LLMs can ground them to predefined symbols that are approximately defined on a similar level of abstraction.\\n2. It might help to have a paragraph discussing results and how advice on effect/policy/plan each contributes to the combined policy. Are they the same? Is it task-dependent? I think this can help better justify that an approach to encode \\\"information about every element of an MDP\\\" is necessary.\\n\\n(The two concerns above are why I gave 2 for contribution and not higher. Would be happy to improve scores if they are addressed)\\n\\n3. Stylistic nit-picking: could you please increase the font size in Figure 1, and reward curves in Figure 2-6? The text in Figure 7 looks much better. Perhaps capitalize \\\"Perception\\\" in the title in the left figure for consistency. Consistent legend colors and orders for different methods on page 8 would improve comparability across figures.\\n4. Broken reference on line 164.\", \"questions\": \"1. How was the expert advice (textboxes on page 8) collected for the main experiments (who are the experts, how are they trained, what's the protocol)?\\n2. Do you have ablation studies for HardMaze in Figure 4?\\n3. Why is RLang-Dyna-Q-combined worse than RLang-Dyna-Q-plan curve in Figure 6?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Part 1 of our Initial Response\", \"comment\": \"We are grateful to the reviewer for their careful evaluation and helpful comments. Please find our responses to the critiques below.\\n\\n**Weaknesses**\", \"regarding_weakness_1\": \"Upon revisiting the method-related sections of the paper, we agree with your point that the paper heavily relies on the cited RLang paper, so we have added the following sentence at the end of section 3.1 to help explain to the reader how RLang-DynaQ works:\\n\\n> Similar to Dyna-Q, RLang-Dyna-Q leverages the Bellman update rule to update Q-values using rollouts collected both from environment interaction and from simulated interaction, which is generated from a partial model of the environment that is learned over time. However, RLang-Dyna-Q also leverages a partial model given by an RLang program to generate simulated rollouts before learning begins (see Algorithm \\\\ref{alg:rlang-dynaq}, our modifications to Dyna-Q are in blue).\\n> \\n\\nAnd also have added the following sentence at the end of section 3 before section 3.1 to explain what the original RLang work does:\\n\\n> These programs are compiled using RLang's compiler into Python functions corresponding to transition functions, reward functions, policies, and plans that can be leveraged by a learning agent.\\n> \\n\\nWe hope that these adjustments make this work feel more stand-alone.\", \"regarding_weakness_2\": \"We appreciate the reviewer\\u2019s comments on our usage of DynaQ and how we may compare our work to other baseline agents. However, we argue that DynaQ is reasonable tabular RL algorithm to base our RLang-enabled agent on and compare it to due to its simplicity and stature as an early discrete, model-based RL algorithm that was one of the first of its kind to learn from model-based simulated rollouts. Our goal in this work is to demonstrate how language can be used to inform a tabula rasa agent, and our choice of DynaQ was motivated by the simplicity of a discrete, tabular agent where various MDP components could more directly ground to in comparison to more modern deep learning methods in which integrating MDP components is less obvious. Integrating natural language advice into such deep RL algorithms is a pressing and interesting area that we leave open for future work.\\n\\nRegarding a comparison to an LLM that can generate a programming language-based solution, this is essentially how the RLang-DynaQ-Plan and RLang-DynaQ-Policy agents work. RLang's syntax for plans and policies is similar enough to Python for LLMs to understand it out of the box. Both of these agents defer to RLang plans and policies over learning itself---these agents will always choose to execute RLang plans and policies when they are applicable, performing essentially the same as an LLM+Python agent if the policies or plans are totally comprehensive. We again point out that our method is about integrating language advice into a reinforcement learning agent, *not* about maximizing agent performance. Under this framing, a comparison to LLM+Python is not very relevant.\"}",
"{\"comment\": \"We appreciate the time and effort you have taken to provide feedback on our submission. We have worked diligently to address your comments and concerns and believe the revisions we've made as a result significantly improve the paper. If our responses and updates resolve your questions, we kindly ask you to consider revisiting your score. Please do not hesitate to let us know if you have additional feedback or concerns.\"}",
"{\"comment\": \"We appreciate the time you have taken to provide feedback on our submission. We have worked diligently to address your comments and concerns. If our responses and updates resolve your questions, we kindly ask you to consider revisiting your score. Please do not hesitate to let us know if you have additional feedback or concerns.\"}",
"{\"comment\": \"We appreciate the reviewer\\u2019s insightful feedback and constructive suggestions. We have addressed their concerns below.\\n\\n**Weaknesses**\", \"regarding_weakness_1\": \"The reviewer correctly points out that we only use Q-Learning-based RL methods in this work. Our goal in this work, however, is to demonstrate how language can be used to inform a tabula rasa agent, and our choice of DynaQ was motivated by the fact that by choosing a discrete tabular agent, we were better able to isolate the impacts of our language grounding approach on learning in comparison to more modern deep learning methods. Integrating natural language advice into such deep RL algorithms is a pressing and interesting area that we leave open for future work.\", \"regarding_weakness_2\": \"We appreciate the reviewer\\u2019s suggestions about including LLM + RL baselines, but we point out that the suggested baseline the reviewer discusses is essentially how the RLang-DynaQ-Plan agent works\\u2014in this case, RLang acts as an intermediary to convert action plans into a policy that can be run by the RL agent. Regarding comparisons to other agents, the goal of this work is to demonstrate that grounding language to every component of an MDP can improve performance, and our experiments demonstrate this. We don\\u2019t claim that this is the most efficient way to ground language, and we welcome follow-up works that can ground language without using RLang.\", \"regarding_weakness_3\": \"The topics the reviewer mentions have been the subject of existing works, but the focus of our work is precisely on how to ground human-given language in RL. The assumption of language for an environment is part of the problem statement we aim to solve, and not a limitation.\", \"regarding_weakness_4\": \"We believe that the symbol-grounding via VLM demonstration in section 4.3 shows that VLMs can be used to obviate the need for some human annotations. The symbol-grounding problem is outside of the scope of this work, but we invite the reviewer to elaborate on what kind of qualitative results would aid a reader in finding this method convincing.\\n\\n**Questions**\", \"regarding_question_1\": \"This is a good question, similar to one that was raised by Reviewer 3iuj. We believe DynaQ performs poorly in this environment due to non-determinism in the VirtualHome simulator. This is not an intended feature of the environment. We note this in the description for Figure 6.\", \"regarding_question_2\": \"Typo fixed, thanks!\", \"regarding_question_3\": \"This is an important question and we thank the reviewer for raising it. Our approach relies on RLang, a formal symbolic language, for capturing language advice. While natural language itself is symbolic and discrete, MDPs may not be, and this raises an important question on how to bridge the gap between a symbolic syntax and a continuous semantics, or meaning. One possible solution could be to represent relevant symbolic action and perception abstractions with continuous functions. For example, representing a \\u201cKick\\u201d skill with a Dynamic Movement Primitive or a \\u201cis_open()\\u201d predicate with a Convolutional Neural Network. This question suggests many open avenues for future work.\\n\\nWe again thank the reviewer for their questions and comments, and welcome any additional feedback.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Part 2 of our Initial Response\", \"comment\": \"Regarding Weakness 3:\\n\\nYour point that the results of this method are somewhat finicky is well-taken. We have addressed some of your concerns by adding a paragraph at the end of section 4.2 to discuss how various kinds of advice impact the performance of agents:\\n\\n> The impact of each kind of advice (e.g. plans, policies, transitions, and rewards) varied across tasks in the VirtualHome and Minigrid experiments, with some environments benefiting primarily from plan-centric advice and others benefiting most from policy advice. In virtually all cases, model-centric advice---about transitions and rewards---was less valuable than other forms of advice. We suggest that this discrepancy is due to how useful model-based advice is in comparison to explicit policy and planning advice. While policy and planning advice describe which actions to take in a given context, model-based advice was often used to suggest which actions \\\\textit{not} to take, relying on the underlying learning agent to find the best action. Furthermore, model-based advice was useful less of the time, i.e. in fewer states. This is best illustrated by comparing the relative performance of effect-enabled RLang-Dyna-Q agents with policy and plan-enabled agents in the MidMazeLava Experiment in Figure \\\\ref{fig:midmazelava} and the FoodSafety Experiment in Figure \\\\ref{fig:foodsafety}. The model-based advice in the first experiment is to avoid lava, which there are many opportunities to walk into, resulting in the performance of the effect-enabled agent closer to the plan and policy-enabled agents. By comparison, the model-based advice in the second experiment is more niche, accounting only for a handful of transitions, and the effect-enabled agent correspondingly performs closer to baseline Dyna-Q than to the plan and policy-enabled agents.\\n> \\n\\nWe note that this has bumped the user study table to the appendix.\\n\\nRegarding your specific comparison of the 5th and 10th piece of advice in the user study, we note that the advices are semantically different in a crucial sense: that the 10th piece of advice makes no mention of the grey door, which must be opened before going to the red door, while the 5th piece of advice explicitly addresses opening the grey door. We agree that this seems like a small difference, but when grounding the advice to an executable plan it makes a meaningful difference. We have included the following sentence at the end of section 4.4 to address your concern about the final piece of user advice:\\n\\n> Failures also occurred when users specified plans whose pre-conditions were not met at the start state of the environment and failed to execute (e.g. the last piece of advice suggests to go to the room with the red key, but the agent cannot visit the room without first opening the grey door).\\n> \\n\\n**Questions**\", \"regarding_question_1\": \"We don\\u2019t compare to conventional RL methods, RLang on its own, or LLMs because the goal of this work is to propose a method for leveraging human language advice that can be used to improve the performance of RL agents. Specifically, we demonstrate how advice about various components of MDPs---including reward functions, transition functions, policies, and plans---can be integrated comprehensively into a single model-based RL agent. We compare our agent (RLang-DynaQ) to a structurally-identical agent (DynaQ) that does not use language advice. 
We don\\u2019t claim to perform competitively against modern deep RL methods, as the center of our work is on language grounding, not maximizing agent performance. We believe this work would be valuable to the language-informed RL community.\", \"regarding_question_2\": \"We cut training off after a small number of episodes because they are sufficient to demonstrate that language-informed agents can learn faster than uninformed agents. Language advice can be extremely potent, and its effects on performance can be seen immediately in most cases. We note that the reward charts used in this paper do not plot average episodic reward on the Y-axis; they plot **cumulative reward**, i.e., when the reward curves achieve a stable slope (not a slope of 0), it means the agents have converged, yielding the same amount of reward on each time step. In nearly all experiments we run all agents until they plateau, i.e. when cumulative reward has reached a stable slope.\", \"regarding_question_3\": \"We address this question in our response to Weakness 3.\\n\\nWe again thank the reviewer for their comments and questions, and welcome any additional feedback.\"}",
"{\"comment\": \"We appreciate the time and effort you have taken to provide feedback on our submission. We have worked diligently to address your comments and concerns and believe the revisions we've made as a result significantly improve the paper. If our responses and updates resolve your questions, we kindly ask you to consider revisiting your score. Please do not hesitate to let us know if you have additional feedback or concerns.\"}"
]
} |
1DVgysiIt7 | Improved Diffusion-based Generative Model with Better Adversarial Robustness | [
"Zekun Wang",
"Mingyang Yi",
"Shuchen Xue",
"Zhenguo Li",
"Ming Liu",
"Bing Qin",
"Zhi-Ming Ma"
] | Diffusion Probabilistic Models (DPMs) have achieved significant success in generative tasks. However, their training and sampling processes suffer from the issue of distribution mismatch. During the denoising process, the input data distributions differ between the training and inference stages, potentially leading to inaccurate data generation. To obviate this, we analyze the training objective of DPMs and theoretically demonstrate that this mismatch can be alleviated through Distributionally Robust Optimization (DRO), which is equivalent to performing robustness-driven Adversarial Training (AT) on DPMs. Furthermore, for the recently proposed Consistency Model (CM), which distills the inference process of the DPM, we prove that its training objective also encounters the mismatch issue. Fortunately, this issue can be mitigated by AT as well. Based on these insights, we propose to conduct efficient AT on both DPM and CM. Finally, extensive empirical studies validate the effectiveness of AT in diffusion-based models. The code is available at https://github.com/kugwzk/AT_Diff. | [
"Generative Model; Adversarial Robustness; Diffusion Model; Distributional Robustness Optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=1DVgysiIt7 | https://openreview.net/forum?id=1DVgysiIt7 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"t80JQGSNjy",
"rJa6plotef",
"prH6uh7oZX",
"m3MOcF57g7",
"ktVMVdGfdj",
"jtEIiX0LKH",
"iaHag05KL0",
"c3TFCJhKst",
"XbdBoTH2Kb",
"Vxvyq9iJmY",
"VcOKdFIZf2",
"QomD8KSG0R",
"IbOrbSzYK8",
"GoBEkrdwBC",
"Cii4nbvU8r",
"3Qs4LKsAko"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1730719796715,
1732432889415,
1732298692050,
1734839684222,
1730759985108,
1732672769940,
1732298674243,
1732298674279,
1730107758718,
1737523913750,
1732298685449,
1732520043749,
1732429451705,
1732298871538,
1732298818862,
1730317307802
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_ouMQ"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Area_Chair_JWwj"
],
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_tM1A"
],
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_gZkr"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_gZkr"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_ouMQ"
],
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_eJv3"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8504/Reviewer_eJv3"
]
],
"structured_content_str": [
"{\"summary\": \"This paper studies the training of unconditional diffusion model. In particular, in order to achieve a better generation quality and enable robust learning of the score network, this paper develops a DRO-based method, and prove the DRO objective in training diffusion models can be formulated as an adversarial learning problem. The paper also identifies a similar mismatch issue in the recently proposed consistency model (CM) and demonstrates that AT can address this problem as well. The authors propose efficient AT for both DPM and CM, with empirical studies confirming the effectiveness of AT in enhancing diffusion-based models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper performs a theoretical analysis of diffusion models and identifies the distribution mismatch problem.\\n\\n2. This paper further builds a connection between the distribution robust optimization and adversarial learning for diffusion models, and develops an adversarial training method for diffusion models.\\n\\n3. This paper conducts efficient adversarial training methods on both diffusion models and consistency models in many tasks. Experimental results demonstrate the effectiveness of the developed algorithms.\", \"weaknesses\": \"1. In general, the algorithm developed in this paper is motivated by the distribution mismatch along the diffusion path. However, there is no experimental results to justify the motivation, there are also no experimental results to verify that the DRO framework can indeed help mitigate the distribution mismatch problem.\\n\\n2. Proposition 2 has already been discovered in existing theoretical papers [1], see their section 3.1. The authors should comment on this point around Proposition 2.\\n\\n3. The advantage of ADM-AT is not that significant compared with the ADM method, a more detailed ablation study or theoretical analysis on using adversarial noise or random Gaussian noise should be added.\\n\\n4. Some statements are not clearly presented. For instance, the description of ADM is not given, the norm notations $\\\\|\\\\|$ are abused, should that be $\\\\ell_1$, $\\\\ell_2$, or $\\\\ell_\\\\infty$?\\n \\n\\n[1] Chen, Lee, and Lu, Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions, ICML 2023\", \"questions\": \"1. some ablation studies for different perturbation levels $\\\\alpha$ should be given.\\n2. Some discussions about different perturbation methods ($\\\\ell_1$, $\\\\ell_2$, or $\\\\ell_\\\\infty$) should be discussed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer eJv3,\\n\\nThank you for your prompt response. We sincerely appreciate your willingness to consider upgrading the score of our work. We are glad that our reply addressed your concerns, and we welcome any further comments you may have.\\n\\nWe apologize for any inconvenience, but it seems the score has not yet been adjusted. At your convenience, we would greatly appreciate it if you could update the score.\\n\\nThank you once again for your constructive comments and suggestions.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"title\": \"General Response\", \"comment\": \"We sincerely thank all the reviewers for their thorough evaluations and valuable constructive feedback. We are encouraged that reviewers find that our paper identifies and formulates the distribution mismatch problem in the diffusion model and builds a connection between the distribution robust optimization and adversarial learning for diffusion models (Reviewer tM1A, ouMQ), provides strong theoretical support for implementing adversarial training (Reviewer gZkr, eJv3, ouMQ), and experimental results demonstrate the effectiveness both on diffusion models and consistency models (Reviewer tM1A, ouMQ).\", \"we_have_updated_our_paper_to_incorporate_suggestions_on_clarifications_and_experimental_results_as_follows\": \"(1) Regarding the suggestion of Reviewer eJv3 and gZkr, we have clarified the notation more rigorously and made the derivation more detailed.\\n\\n(2) On experimental results, we incorporate more evaluation metrics (IS/sFID/Precision/Recall), more models like SD, and evaluate models on more NFEs.\\n\\n(3) More detailed ablation studies of our AT framework are conducted, including different perturbation methods ($\\\\ell_1, \\\\ell_2$ or $\\\\ell_\\\\infty$) and adversarial learning rates $\\\\alpha$.\\n\\n(4) We also visualize the generation results for qualitative comparisons between our method and baselines.\"}",
"{\"metareview\": \"This paper proposes to use robustness-driven adversarial training to solve the distribution mismatch problem in diffusion models. Additionally, the same idea is applied to distillation methods such as the consistency model to further improve the performance. The proposed method is validated on a series of experimental evaluations. The reviewers had some concerns on the evaluation metrics and ablation studies, which were addressed in the author rebuttal. All reviewers are inclined toward acceptance, a sentiment with which I agree.\", \"additional_comments_on_reviewer_discussion\": \"Before rebuttal, all the reviewers except eJv3 showed a positive view of this paper. The main concerns are on the evaluation metrics in the experiments. The authors added the suggested metrics and showed consistently good results. Reviewer eJv3 increased their rating to 6. The other concerns are mainly about ablation studies and more visualization results. The authors did a good job in answering these questions. My decision is mainly based on the positive rating of the reviewers and also the solid rebuttal from the authors.\"}",
"{\"summary\": \"The paper proposes to introduce DRO to address the distribution matching problem at training diffusion model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper present theories to show that DRO can help address the distribution matching problem in training and testing diffusion models.\\n\\n2. The improvement over baselines on Cifar and Imagenet64 show that DRO is useful.\", \"weaknesses\": \"1. There is no qualitative comparisons. Authors mainly conduct experiments on Cifar, ImageNet and Laion dataset. It would be better to put some images for more direct comparisons. In addition, the code is not provided.\\n\\n2. The efficiency comparison. I am wondering how much overhead it brings to adopt eq 14 instead of the classical denoising objective. I am expecting that it is quite large.\\n\\nI am giving score of 6 based on the prerequisite that above two concerns are answered during rebuttal.\", \"questions\": \"as above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for their response and efforts. I will maintain my positive rating for this paper.\"}",
"{\"comment\": \"Thanks for your valuable comments and suggestions. Here we address your concerns as follows.\\n\\n**Q1**: More NFEs should also be verified, although this method can improve efficient sampling, whether is adaptable and robust for more denoising steps should also be verified.\\n\\n**A1**: Following your suggestion, we report the results with more NFEs (100, 200) below and add them in Appendix F.3 of the revised paper. \\n\\nTable 5. Sample quality measured by FID $\\\\downarrow$ of various sampling methods of DPM under 100 or 200 NFEs on $\\\\texttt{CIFAR10}$ 32x32.\\n|Methods-NFEs |IDDPM-100|IDDPM-200|DDIM-100|DDIM-200|ES-100|ES-200|DPM-Solver-100|DPM-Solver-200|\\n|-|-|-|-|-|-|-|-|-|\\n|ADM-FT | 3.34 | 3.02 | 4.02 | 4.22 | 2.38 | 2.45 | 2.97 | **2.97**|\\n|ADM-IP | 2.83 | 2.73 | 6.69 | 8.44 | 2.97 | 3.12 | 10.10 | 10.11|\\n|ADM-AT (Ours) | **2.52** | **2.46** | **3.19** | **3.23** | **2.18** | **2.35** | **2.83** | 3.00|\\n\\nTable 6. Sample quality measured by FID $\\\\downarrow$ of various sampling methods of DPM under 100 or 200 NFEs on $\\\\texttt{ImageNet}$ 64x64.\\n|Methods-NFEs |IDDPM-100|IDDPM-200|DDIM-100|DDIM-200|ES-100|ES-200|DPM-Solver-100|DPM-Solver-200|\\n|-|-|-|-|-|-|-|-|-|\\n|ADM-FT | 3.88 | 3.48 | 4.71 | 4.38 | 3.07 | **2.98** | **4.20** | 4.13 |\\n|ADM-IP | 3.55 | **3.08** | 8.53 | 10.43 | 3.36 | 3.31 | 9.75 | 9.77 |\\n|ADM-AT (Ours) | **3.35** | 3.16 | **4.58** | **4.34** | **3.05** | 3.10 | 4.31 | **4.10** |\\n\\nAs can be seen, our method is still effective with hundreds of NFEs. \\n\\n\\n\\n\\n**Q2**: Some complex derivations in supplementary material are too brief to understand, such as Eq(30) and Eq(59-62), I'm not sure if there are any typos in them, I suggest checking the equations carefully and modifying them.\\n\\n**A2**: Thanks for pointing out this, for these complex equations, we have revised them to be more clear in the revised version. Please check them accordingly. \\n\\n**Q3**: Consistency models on benchmark datasets such as CIFAR10 and ImageNet, which can be more common and convincing?\\n\\n**A3**: Following your suggestion, we conduct the consistency model experiments on ImageNet 64x64.\\nNote that since the limited computational resources, we can't directly use the hyperparameters as [1], instead we train the models for 300K iterations with a batch size of 512. Compared with the baseline method CM, our proposed AT method improves the one-step FID from 7.80 to 7.23. We will continue the training process and revise them into the final version. \\n\\n**Q4**: Derivations in supplementary material should be checked carefully and written with more details.\\n\\n**A4**: Thank you for your advice, we have carefully checked the derivations in the revised version, and made them more readable. \\n\\n**Q5** Why efficient AT can improve performance compared with PGD is a bit confusing.\\n\\n**A5**: As found in [2], for adversarial-type training, in classification, AT takes a balance between accuracy and robustness, i.e., obtaining robustness may sacrifice accuracy. We speculate this phenomenon also holds in the diffusion model, i.e., too strong perturbation may sacrifice the noise prediction accuracy. That explains why strong PGD has slightly worse performance than our efficient AT in some situations. \\n\\nReference\\n\\n[1] Consistency Models. Song et al., 2023.\"}",
"{\"comment\": \"Thanks for your valuable comments and suggestions. Here we address your concerns as follows.\\n\\n**Q1**: My main concern in this paper is the evaluation. Currently, the proposed method is only evaluated using the ADM model. I wonder whether the effectiveness of **more advanced model** such as the **stable diffusion** still holds?\\n\\n**A1**: Thanks for your valuable suggestion. We apply our method to LCM under Stable Diffusion v1.5 in Section 6.3, and the experimental results show the effectiveness of our method. To further address your concern, we finetune stable diffusion v1.5 under the framework of DPM by our proposed adversarial training. The evaluation results are summarized below. \\n\\nTable 5. Comparison of FID on MS-COCO dataset with DDIM sampler.\\n| |5 Steps|10 Steps|\\n|-|-|-|\\n|SD v1.5 | 32.95 | 18.99|\\n|SD v1.5-AT |25.82|15.65|\\n\\nAs can be seen, our method still has better performance, compared with the baseline method. \\n\\n**Q2**: The authors only use FID score as the evaluation metric, while it is easy to evaluate the results using other metrics such as IS, sFID, precision, recall, as done in the ADM paper. Why these metrics are not included?\\n\\n**A2**: \\nThanks for your suggestion. We list the IS, sFID, Precision, and Recall of CIFAR10 32x32 with DDIM sampler as a representation as below.\\nThe results of our ADM-AT also outperform the baseline across metrics overall.\\nMore results of various samplers and datasets can be found in Appendix F.4 of the revised paper.\\n\\nTable 5. Comparison of sFID $\\\\downarrow$ and IS $\\\\uparrow$ on $\\\\texttt{CIFAR10}$ 32x32.\\n|NFE-metric |5-sFID|5-IS|8-sFID|8-IS|10-sFID|10-IS|20-sFID|20-IS|50-sFID|50-IS|\\n|-|-|-|-|-|-|-|-|-|-|-|\\n|ADM | 12.75 | 7.76 | 8.53 | 8.62 | 8.39 | 8.70 | 6.19 | 9.08 | 4.99 | 9.19|\\n|ADM-AT | **12.56** | **7.97** | **7.93** | **8.90** | **7.08** | **8.90** | **5.37** | **9.17** | **4.66** | **9.51**|\\n\\nTable 6. Comparison of Precision (P) $\\\\uparrow$ and Recall (R) $\\\\uparrow$ on $\\\\texttt{CIFAR10}$ 32x32.\\n|NFE-metric |5-Precision|5-Recall|8-Precision|8-Recall|10-Precision|10-Recall|20-Precision|20-Recall|50-Precision|50-Recall|\\n|-|-|-|-|-|-|-|-|-|-|-|\\n|ADM | 0.57 | **0.47** | 0.59 | 0.52 | 0.61 | 0.52 | 0.64 | 0.52 | 0.63 | 0.60|\\n|ADM-AT | **0.59** | 0.46 | **0.62** | 0.52| **0.63** | **0.54** | **0.65** | **0.58** | **0.66** | **0.61**|\"}",
"{\"summary\": \"This paper points out the distribution mismatching problem in traditional training of diffusion-based models (DPM) and proposes to conduct efficient adversarial training (AT) during the training of DPM to mitigate this problem. Theoretical analysis is strong enough to support its argument and experiments also verify the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The motivation for mitigating distribution mismatching is clear and important for efficient sampling.\\n\\n2. This paper provides strong theoretical support for implementing adversarial training to correct distribution mismatching, making this method convincing.\", \"weaknesses\": \"1. The experimental results may not be enough, for example, for Table 1 and Table 2, more NFEs should also be verified, although this method can improve efficient sampling, whether is adaptable and robust for more denoising steps should also be verified.\\n\\n2. Some complex derivations in supplementary material are too brief to understand, such as Eq(30) and Eq(59-62), I'm not sure if there are any typos in them, I suggest checking the equations carefully and modifying them.\", \"questions\": \"1. As the weakness above, for Table 1 and Table 2, more NFEs should also be verified.\\n\\n2. Why not also try generation using consistency models on benchmark datasets such as CIFAR10 and ImageNet, which can be more common and convincing?\\n\\n3. Derivations in supplementary material should be checked carefully and written with more details. \\n\\n4. Why efficient AT can improve performance compared with PGD is a bit confusing. Intuitively, PGD should be more accurate to find $\\\\delta_t$, thus more deep insights should be provided here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Thanks for your valuable comments and suggestions. Here we address your concerns as follows.\\n\\n**Q1**: There is no qualitative comparisons. It would be better to put some images for more direct comparisons. \\n\\n**A1**: \\nThanks for your suggestions. The revised paper adds the qualitative comparisons between ADM-AT with baseline ADM/ADM-IP on CIFAR10 32x32 ( Figure 4)and ImageNet 64x64 (Figure 5). We also add comparisons between LCM-AT and LCM (Figures 6 and 7).\\nOverall, models trained with our proposed AT method demonstrate superior performance in generating images for both class-conditional image generation and text-to-image generation tasks.\\nOur AT method generates more realistic and higher-fidelity samples, which is consistent with the results in Tables 1, 2, and 3 in our paper.\\n\\n**Q2**: In addition, the code is not provided.\\n\\n**A2**: We will release our code, but currently the code for the consistency model and ImageNet 64x64 is still polished. We provide the diffusion model adversarial training code on CIFAR10 32x32 in the following anonymous link: [code](https://anonymfile.com/89rDY/adm-at.zip).\\n\\n**Q2**: The efficiency comparison with AT and the classical denoising objective. \\n\\n**A2**: Yes, as you mentioned, solving the proposed adversarial objective (14) with standard adversarial training technique (e.g., PGD in [1]) is computationally expensive, as we have explored in Table 8 of Appendix G.1. However, as mentioned in Section 6.1, **we refer to efficient adversarial training [2] algorithm (Algorithm 1) to resolve this**. By doing so, every update of the model under the proposed (14) has a **similar computational cost** to the classical objective. For example, we evaluate 10K updates of the model under methods: ADM/ADM-IP/ADM-AT(Ours). The spending time is 960/961s/970s, respectively. \\n\\nReference\\n\\n[1] Towards deep learning models resistant to adversarial attacks. Madry et al., 2018.\\n\\n[2] Adversarial training for free! Shafahi et al., 2019.\"}",
"{\"comment\": \"I thank the authors for their detailed response. I will maintain the positive rating for this paper.\"}",
"{\"comment\": \"I thank the authors for the rebuttal. The additional experiments resolve most of my concerns, thus I am raising my score to 6. I tend to not raising more because I think the result improvement is very marginal, thus it is unclear whether the proposed method would have practical value.\"}",
"{\"title\": \"Response Part 2\", \"comment\": \"**Q4**: Some statements are not clearly presented. For instance, the description of ADM is not given.\\n\\n**A4**: For the description of ADM, it is a UNet-type [2] (with self-attention layer) neural network proposed by [3]. It is a standard architecture for diffusion model under image generation task. We have added this description in the revised version. \\n\\n**Q5**: The norm $\\\\|\\\\cdot\\\\|$ notations are abused, should that be $\\\\ell_1, \\\\ell_2$ or $\\\\ell_\\\\infty$.\\n\\n**A5**: Without specifying, we use $\\\\|\\\\cdot\\\\|$ to denote $\\\\ell_{2}$-norm. We have added this description in line 137 of the revised version. \\n\\n\\n\\n**Q5**. Some ablation studies for different perturbation levels $\\\\alpha$ should be given.\\n\\n**A5**: \\nWe explore the proposed under different perturbation level $\\\\alpha$ on $\\\\texttt{CIFAR10}$ 32x32 and $\\\\texttt{ImageNet}$ 64x64 below. IDDPM is adopted as the inference sampler.\\n\\nTable 2. Comparison of different adversarial learning rate \\u03b1 on $\\\\texttt{CIFAR10}$ 32x32.\\n| $\\\\alpha$ $\\\\backslash$ NFEs|5 | 8 |10 | 20 | 50|\\n|-|-|-|-|-|-|\\n|$\\\\alpha=0.05$ | 51.72 | 32.09 | 25.48 | 10.38 | 4.36 |\\n|$\\\\alpha=0.1$ | **37.15** | **23.59** | **15.88** | **6.60** | **3.34** |\\n|$\\\\alpha=0.5$ | 63.73 | 40.08 | 27.57 | 7.23 | 3.42 |\\n\\nTable 3. Comparison of different adversarial learning rate $\\\\alpha$ of our AT framrwork on $\\\\texttt{ImageNet}$ 64x64.\\n| $\\\\alpha$ $\\\\backslash$ NFEs|5 | 8 |10 | 20 | 50|\\n|-|-|-|-|-|-|\\n|$\\\\alpha=0.1$ | 56.92 | 27.39 | 24.06 | 10.17 | 5.82|\\n|$\\\\alpha=0.5$ | **45.65** | **23.79** | **19.18** | **8.28** | **4.01**|\\n|$\\\\alpha=0.8$ | 46.92 | 28.46 | 22.47 | 9.70 | 4.25|\\n\\nWe observe that $\\\\alpha = 0.1$ is better on $\\\\texttt{CIFAR10}$ 32x32 and $\\\\alpha=0.5$ is better for $\\\\texttt{ImageNet}$ 64x64. We observe that image in larger size corresponds to larger optimal perturbation level $\\\\alpha$. We speculate this is because we use the perturbation measured under $\\\\ell_{2}$-norm, where the $\\\\ell_{2}$-norm of vector will increase with its dimension. We have added this ablation study in Appendix G.3 of the revised paper, please check it. \\n\\n\\n**Q6**: Some discussions about different perturbation methods ($\\\\ell_1, \\\\ell_2$ or $\\\\ell_\\\\infty$) should be discussed.\\n\\n**A6**: These three perturbations are actually mathematically equivalent e.g., for vector $x\\\\in \\\\mathbb{R}^{d}$, it holds $\\\\|x\\\\|_{\\\\infty}\\\\leq \\\\|x\\\\|_{2} \\\\leq \\\\sqrt{d}\\\\|x\\\\|_{\\\\infty}$. Therefore, we select $\\\\|\\\\cdot\\\\|_{2}$ as representation in our paper. To further address your concern, we explore our method under three different perturbation methods ($\\\\ell_1, \\\\ell_2$ or $\\\\ell_\\\\infty$). The results are summarized below. \\n\\nTable 4. Comparison of different perturbation norms on $\\\\texttt{CIFAR10}$ 32x32.\\n|Perturbation Norm |IDDPM-50|DDIM-50|ES-20|DPM-Solver-10|\\n|-|-|-|-|-|\\n|$l_1$|4.45|4.91|4.72|5.05|\\n|$l_2$|**3.34**|**3.07**|**4.36**|**4.81**|\\n|$l_{\\\\infty}$|3.87|3.63|4.48|5.32|\\n\\nDuring our experiments, we found that our method under $\\\\ell_2$-perturbation is more stable and indeed has better performance, thus we suggest to use $\\\\ell_2$-perturbation as in our paper. This discussion has been added in the Appendix G.3 of the revised paper. 
\\n\\n\\n\\nReference\\n\\n[1] Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions. Chen et al., 2023.\\n\\n[2] U-Net: Convolutional Networks for Biomedical Image Segmentation. Ronneberger et al., 2015.\\n\\n[3] Diffusion Models Beat GANs on Image Synthesis. Dhariwal et al., 2021.\"}",
"{\"title\": \"Response Part 1\", \"comment\": \"Thanks for your valuable comments and suggestions. Here we address your concerns as follows.\\n\\n**Q1**: In general, the algorithm developed in this paper is motivated by the distribution mismatch along the diffusion path. However, there is no experimental results to justify the motivation, there are also no experimental results to **verify that the DRO framework can indeed help mitigate the distribution mismatch problem**.\\n\\n**A1**: Thanks for your valuable comment, we follow your suggestion to verify the quality of the intermediate $x_{t}$, as the mismatching problem is revealed by this. Concretely, we evaluate the FID between the generated $x_{t}$ by our and baseline methods, and compare them with the ground-truth ones. \\nWe adopt the IDDPM sampler under 200 NFEs and report the last 10 steps FID-10K in Table 1. We can observe that the ADM-AT achieves better FID on these steps. \\n\\nTable 1. Comparison of FID of intermediate $x_t$.\\n| Step Index|200|199 |198|197|196|195|194|193|192|191|190|\\n|-|-|-|-|-|-|-|-|-|-|-|-|\\n|ADM|4.94|5.34|11.63|19.61|27.60|35.23|42.37|49.08|55.33|61.09|66.80|\\n|ADM-IP|5.23|5.62|12.04|19.69|27.48|34.95|41.89|48.43|54.60 | 60.44|66.04|\\n|ADM-AT|**4.52**|**5.00**|**11.37**|**18.90**|**25.38**|**32.01**|**39.83**|**46.05**|**51.87**|**57.44**|**62.73**|\\n\\nNote that the Step 200 is the endpoint of the sampling process. \\n\\n**Q2**: Proposition 2 has already been discovered in existing theoretical papers [1], see their section 3.1. The authors should comment on this point around Proposition 2. \\n\\n**A2**: Thank you for pointing out the important reference. The results in section 3.1 are similar to our Proposition 2, as both of us quantify the KL divergence between the generated samples and the ground-truth ones. However, their results focus on the generated target data $x_{0}$, while ours focuses on all $x_{t}$ ($0\\\\leq t\\\\leq T$), though our techniques are similar. We have mentioned this comparison in the revised version. \\n\\n**Q3**: The advantage of ADM-AT is not that significant compared with the ADM method, a more detailed ablation study or theoretical analysis on using adversarial noise or random Gaussian noise should be added. \\n\\n**A3**:\\nOur ADM-AT significantly outperforms the baselines ADM or ADM-IP across various samplers, especially under the setting of fewer NFEs (more practical and efficient). As shown in Table 1 of our paper: on CIFAR-10 32x32, with the IDDPM sampler, ADM-AT improves FID from **10.52 to 6.60**. With the DDIM sampler, ADM-AT improves FID from **11.66 to 9.30** under 10 NFEs. With the DPM-Solver, ADM-AT improves FID from **8.00 to 5.84**. These improvements are recognized as significant and practical. \\n\\nFirstly, the proposed adversarial noise is induced by our theoretical framework under Distributional Robust Optimization to avoid distribution mismatching. As for the comparison with Gaussian noise, the baseline method ADM-IP adds Gaussian noise during training. According to our empirical results, our proposed adversarial noise is significantly better than the Gaussian noise.\"}",
"{\"summary\": \"This paper identifies the distribution mismatch problem in the training and sampling processes. Consequently, they propose a distributionally robust optimization procedure in the training to bridge the gap. The authors apply the method to both diffusion models and the consistent model, and demonstrate the effectiveness of the proposed method on several benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Identifying and formulating the distribution mismatch problem in diffusion model is an important problem in practice.\\n2. The proposed solution is elegant, supported by sufficient theoretical analysis. The derivations of the solution is clear and sound.\\n3. The writing is fairly clear.\", \"weaknesses\": \"My main concern on this paper is the evaluation. Currently the proposed method is only evaluated using the ADM model. I wonder whether the effectiveness on more advanced model such as the stable diffusion still holds?\\n\\nFurthermore, the authors only use FID score as the evaluation metric, while it is easy to evaluate the results using other metrics such as IS, sFID, precision, recall, as done in the ADM paper. Why these metrics are not included?\", \"questions\": \"The paper is a good one in general. I like how the problem is formulated and how the solution is derived. However, given the current evaluation (see the weakness), I am not fully convinced the proposed method is an effective way to deal with the problem. I would like to see how the authors respond to my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1DIdt2YOPw | Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations | [
"Christian Tomani",
"Kamalika Chaudhuri",
"Ivan Evtimov",
"Daniel Cremers",
"Mark Ibrahim"
] | A major barrier to the practical deployment of large language models (LLMs) is their lack of reliability. Three situations where this is particularly apparent are correctness, hallucinations when given unanswerable questions, and safety where responses are harmful or offensive. In all three cases, models should ideally abstain from responding---much like humans refrain from answering questions when uncertain. Inspired by analogous approaches in classification, this study explores the feasibility and efficacy of LLMs abstaining when uncertain in the domain of question-answering. We investigate two kinds of uncertainties, statistical uncertainty metrics and a distinct verbalized measure, termed as In Dialogue Uncertainty (InDU), measuring hedge words such as `I don't know' in responses. Using these uncertainty measures combined with models with and without reinforcement learning with human feedback (RLHF), we show in all three situations, abstention based on the right kind of uncertainty measure can boost the reliability of LLMs. By abstaining for a few highly uncertain samples we improve correctness by up to 8\%, avoid 50\% of hallucinations by correctly identifying unanswerable questions, and in particular increase safety by 70-99\% with almost no additional computational overhead. | [
"LLMs",
"uncertainty",
"abstention",
"correctness",
"hallucinations",
"safety"
] | Reject | https://openreview.net/pdf?id=1DIdt2YOPw | https://openreview.net/forum?id=1DIdt2YOPw | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"tWvYrF7EPx",
"WIrrKpYaFC",
"TMYFz9vPTQ",
"OTIAEJJtIL",
"FTIIPWMwDa",
"AZ6MQtNhUb",
"6iGzXb3TF5"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"decision",
"official_review",
"meta_review"
],
"note_created": [
1730529438459,
1730587444702,
1730528641839,
1730761659884,
1737524263238,
1730789754408,
1735064749527
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13491/Reviewer_NpjB"
],
[
"ICLR.cc/2025/Conference/Submission13491/Reviewer_KHqs"
],
[
"ICLR.cc/2025/Conference/Submission13491/Reviewer_5sjZ"
],
[
"ICLR.cc/2025/Conference/Submission13491/Reviewer_XzXp"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13491/Reviewer_SxZN"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"The authors investigate the potential of uncertainty-based abstention to improve performance in question-answering tasks, specifically focusing on correctness, hallucinations, and safety scenarios. They analyze two types of uncertainty\\u2014statistical uncertainty and in-dialogue uncertainty\\u2014and examine the effects of RLHF on these uncertainties.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation behind this work is compelling and relevant for the deployment and real-world application of LLMs. However, while the authors highlight the benefits of combining RLHF with uncertainty to enhance performance, reviewers suggest that additional validation experiments, particularly in the areas of hallucination and safety, would strengthen the claims.\\n2. The paper underscores the importance of uncertainty for abstention, demonstrating that incorporating uncertainty can improve various aspects of model performance.\", \"weaknesses\": \"1. Although the reviewers appreciate the study's motivation, they raise concerns regarding the experimental setup. For instance, in the hallucination settings, tests are only conducted on the SelfAware dataset. It would be beneficial to include additional datasets to more comprehensively evaluate the method's effectiveness in reducing hallucinations, especially given that current approaches primarily rely on Retrieval-Augmented Generation (RAG) [1].\\n\\n2. In the safety setting, the reviewers are interested in seeing how the uncertainty mechanism performs across a broader range of evaluation datasets. For example, PKU-SafeRLHF [3] provides safe, decoupled preferences and red-teaming prompts; how does the proposed approach perform on safety measures in these rigorous evaluations via case by case gpt-4 evaluation?\\n\\n3. The reviewers are not fully convinced by the claim that \\\"our experiments demonstrate that RLHF fine-tuning not only aligns the model with safety but also enhances its uncertainty awareness in relation to safety.\\\" RLHF alone does not guarantee model safety, particularly when the preference data distribution is uncertain. For instance, the GPT-4 technical report highlights that while RLHF helps align model responses with user intent, models may still exhibit brittle or undesired behaviors on both safe and unsafe inputs, especially when labeler instructions during reward model data collection are underspecified. Reviewers suggest that the authors provide a more detailed discussion on this aspect and include comparisons with models specifically designed for safety alignment, such as RLCD [2] and Safe RLHF [3].\\n\\n4. Regarding evaluation, the authors rely primarily on statistical measures, such as keyword-based approaches. However, this static evaluation method may fall short of detecting nuanced harmful responses, such as those involving emotional abuse. Additionally, Llama Guard\\u2019s performance drops in non-OOD (Out-of-Distribution) scenarios. 
Reviewers recommend including case-by-case GPT-4 evaluations to directly assess the safety of two responses, providing a more granular safety evaluation.\\n\\n[1] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection \\n[2] RLCD: Reinforcement Learning from Contrastive Distillation for Language Model Alignment \\n[3] PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"This paper utilizes human evaluations to conduct a safety assessment of the model's outputs. The authors state, \\\"We validate the effectiveness of fuzzy exact match by comparing it with human evaluations on 200 samples each from TriviaQA and SciQA.\\\" However, details regarding the background and diversity of these 200 individuals remain unclear, as well as whether these evaluations comply with IRB requirements.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper focuses on abstention and uncertainty in LLMs, benchmarking how useful different uncertainty estimates are across three broad tasks.\\nThese tasks are correctness, unanswerable questions, and safety. \\nCorrectness is evaluated against standard QA data (TriviaQA, SciQA, CoQA, StrategyQA, and GSM8K).\\nUnanswerable vs answerable questions are sourced from the SelfAware dataset and SciQA. \\nAdversarial examples are sourced from AttaQ and AutoDAN. \\nThe authors examine negative log-likelihood, predictive entropy, semantic entropy, and In-Dialogue Uncertainty, which is the number of hedge tokens present in the output. \\nAll experiments were run on Llama2.\\nAcross different tasks, the authors find that different uncertainty estimates lead to better or worse calibration, with no one method consistently outperforming the others. \\nThe authors show that thresholding uncertainty scores can lead to better correctness, safety, and less hallucination on unanswerable questions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Well written and clearly organized**: the paper is easy to follow, the writing is clear, and the questions being tested are clear.\", \"**In-dialogue uncertainty metric is new**: As far as I can tell, past work has not proposed counting the number of hedge words as a method of confidence estimation.\", \"**Sufficient datasets examined**: The authors do a good job of testing multiple datasets to make their point.\"], \"weaknesses\": [\"**Limited novelty:** The novelty of the paper is pretty limited. From the abstract/intro, it seems like the main contribution of the paper is in showing that abstention based on uncertainty can improve results. This result is not new (see next point about missing related work). Moreover, the primary methodological novelty in this work is In-Dialogue Uncertainty, which is a fairly small contribution and does not consistently provide benefits in all settings. The Discussion presents a more nuanced view of the contribution (i.e. framing this paper as a survey of confidence estimation methods and showing that there isn't one method that consistently does well.) This framing would have been more novel but then I would have expected to see more different uncertainty estimation methods tested.\", \"**Missing related work:** This paper misses a large chunk of the related work on abstention and confidence estimation from the last 2 years, focusing on older work. Examples:\", \"https://arxiv.org/pdf/2407.18418\", \"https://arxiv.org/abs/2308.13387\", \"https://arxiv.org/abs/2311.09677\", \"https://arxiv.org/abs/2404.00474\", \"https://arxiv.org/abs/2405.21028\", \"https://arxiv.org/abs/2401.06730\", \"https://aclanthology.org/2024.naacl-long.301/\", \"**Outdated models**: It's not clear why the authors only conduct experiments on Llama2, when there are many newer and more performant models available (even in the same family). To make a strong claim about when different estimation methods work and don't work, I would have expected to see more open-source models tested.\", \"**No unified method**: one way this paper could have been made more compelling is if it presented a unified estimation method/recipe that worked well across settings. Currently, the paper does not have any such unified method.\"], \"questions\": [\"It would be worth discussing the tradeoff between abstention and usability further.\", \"In-Dialogue Uncertainty is given an acronym but the acronym isn't used. 
It's also misspelled on L053.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes uncertainty-based methods for abstaining from providing incorrect, hallucinated, or unsafe answers. It considers probabilistic uncertainty (log likelihood, entropy, and semantic uncertainty) as well as verbal uncertainty. Experiments with Llama2 models across various question-answering and adversarial-prompting benchmarks demonstrate that (1) the considered uncertainty measures contain information about whether an answer is incorrect, hallucinated, or unsafe, and (2) abstention based on these measures is effective.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Studied topic is timely and important.\", \"Paper outlines multiple applications of the proposed method.\", \"Paper is well-written and easy to follow.\", \"Experiments test diverse uncertainty measures\", \"Experiments consider different model sizes and variants.\"], \"weaknesses\": [\"There exist more sophisticated approaches to these problems but they are not compared empirically [1, 2]. [1] build classifiers based on hidden representations, and [2] constructs ensembles using LLM prompting. Authors mention the weaknesses of prompting and fine-tuning in related work but do not demonstrate them through experiments. In fact, the experiments do not seem to concern distribution shift so there is no reason not to compare with those methods.\", \"Related work is missing some recent work showing similar results (e.g., [1,2,3]).\", \"Experiment sections mostly discuss observations but does not attempt to explain the observed phenemena.\", \"Some parts of the experiment section are unclear or can be further improved. Specifically, in figure 3, \\\"statistical uncertainty\\\" should be replaced with a specific measure and model (e.g., entropy). It is also missing model names. The plots need to have baseline curves to clearly illustrate improvements.\", \"[1] https://arxiv.org/abs/2304.13734\", \"[2] https://arxiv.org/abs/2402.00367\", \"[3] https://arxiv.org/abs/2402.13213\"], \"questions\": \"\\\"we recommend that practitioners differentiate between different types of questions before they decide whether to use statistical uncertainty or verbalized uncertainty for abstention...\\\" Could you explain why the experiment you conducted supports this claim?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"Paper shows that abstention based on different measures of uncertainty for different types of prompts works well. Specifically, for correctness and safety, statistical uncertainty-based abstention helps improve correctness and reduce unsafe responses. For hallucinations, abstention based on in-dialogue uncertainty (coined by authors as the inclusion of phrases such as \\\"I don't know\\\" in model responses) helps reduce hallucinations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Well-written paper with a clear walkthrough over the different problems and uncertainty metric considerations\", \"Interesting idea of using in-dialogue uncertainty as a measure of response uncertainty\", \"Clear description of experiments, metrics, and results; strong scientific method\"], \"weaknesses\": [\"It should not come as a surprise that using uncertainty metrics helps LLMs abstain when they should not engage with the prompt, as shown in Kadavath (2022) and multiple other papers cited in the related works. The core contributions of this paper can be boiled down to the introduction InDU (which also was inspired by an existing paper by Islam (2020)) and when to use each kind of uncertainty, both of which seem more fitting for, e.g., an appendix in Kadavath's paper, especially since this reads more like a survey paper of implementation details than novel ideas or concepts\", \"Minor: various typos such as \\\"In-Dialogoe\\\" in Introduction, Islam et al. without year in 3.2\"], \"questions\": [\"How can we practically account for all possible hedge words for every use case? Some prompts might even require responses to include hedge words; seems like a lot of finetuning and engineering effort to incorporate this uncertainty metric\", \"I'm not sure I agree with hallucinations being only considered for unanswerable questions. LLMs definitely hallucinate in other situations. How extendable are these findings?\", \"Statistical uncertainty metrics perform at more or less the same level. What should the reader take away from all these results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The paper studies how token-level and semantic uncertainty metrics on generated LLM text relate to accuracy on knowledge-intensive tasks, hallucination on unanswerable questions, and response safety on adversarial/malicious question datasets. Among the uncertainty metrics explored is one based on on counting hedge words in model responses. Experiments show that these uncertainty metrics are useful for model abstention in order to improve correctness of model generations, reduce hallucination, and increase response safety (at the cost of an increased abstention rate). Experiments are conducted across many relevant datasets using Llama 2 models.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Important: The core idea of the paper is sensible, relating uncertainty metrics to abstention in order to improve factuality and safety of model responses.\", \"Important: The experiment design is sound and the chosen metrics are reasonable. The experiments include many relevant datasets for measuring knowledge, hallucination, and safety.\", \"Of some importance: The paper is fairly well-written and easy to follow.\"], \"weaknesses\": [\"Very important: The novelty of the work is quite limited in my view. The high level conclusion that uncertainty is useful for abstention has already been thoroughly explored. What I see as this paper\\u2019s contributions beyond this observation are: (1) measuring in-dialogue uncertainty can help with abstention, specifically for factuality/hallucination; (2) uncertainty can help with safety, as it turns out that responses on AutoDAN-like datasets are more likely to be unsafe if they are uncertain. I don\\u2019t think the paper claims much beyond this. So a further issue with the novelty here is that (1) has already been shown, more or less, in https://arxiv.org/abs/2405.21028. The (2) result is interesting but I do not think it is a large enough result for a full paper, and it is not explored in much depth beyond one paragraph in this paper.\", \"Important: The measurement of in-dialogue uncertainty, even if useful, is a heuristic that does not feel particularly generalizable, especially compared to other model-based measurements of in-dialogue confidence.\"], \"questions\": [\"To be clear, in these experiments, the model might not actually abstain, right? You would have to calculate these metrics and then hide the response from the user if it were deemed unacceptable, right?\", \"It was hard to tell if there was a proposed training or inference method from reading the intro. It took me a while to realize that this was more of an analysis paper, showing how these metrics could be used for filtering model outputs.\", \"Sec. 3 probably doesn\\u2019t need to take up as much space is it currently does (people should know what NLL is), but at the same time it could give more understanding into the metrics (computing actual entropy over samples is hard, so you compute predictive entropy).\", \"L.377 typo \\u201cunanswerable vs. unanswerable\\u201d\", \"L.471 \\u201cabstaining to answer\\u201d\\u2192 \\u201cabstaining from answering\\u201d\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"PC is entering meta-review on behalf of SAC/AC:\\n\\nThe reviewers did not believe that this paper was a strong contribution given the limited novelty of the work, and lack of anchoring in the field.\", \"additional_comments_on_reviewer_discussion\": \"TBD\"}"
]
} |
1DEHVMDBaO | Adaptive Memory Mechanism in Vision Transformer for Long-form Video Understanding | [
"Zhenshun Liu",
"Zijian Lei",
"Kejing Yin",
"William K. Cheung"
] | In long-form video understanding, selecting an optimal Temporal Receptive Field (TRF) is crucial for Vision Transformer (ViT) models due to the dynamic nature of diverse video motion contents, which varies in duration and velocity. A short TRF can result in loss of critical information, while a long TRF may decrease ViT's performance and computational efficiency caused by the unrelated contents in videos and the quadratic complexity of the attention mechanism. To tackle this issue, we introduce Adaptive Memory Mechanism (AMM) that enables ViT to adjust its TRF dynamically in response to the video's dynamic contents. Instead of discarding Key-Value (KV) Cache from the earliest inference when the settings limit is reached, our approach uses a Memory Bank (MB) to retain the most important embeddings from the Key-Value Cache that would otherwise be discarded in memory-augmented methods. The selection is based on the attention score calculated between the Class Token (CLS) in current iteration and the KV Cache in previous iterations. We demonstrate that Adaptive Memory Vision Transformer (AMViT) outperforms existing methods across a diverse array of tasks (action recognition, action anticipation, and action detection). | [
"Key-Value Cache",
"Vision Transformer",
"Video Understanding"
] | Reject | https://openreview.net/pdf?id=1DEHVMDBaO | https://openreview.net/forum?id=1DEHVMDBaO | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zYJBP74GmE",
"wBhvW2qFA7",
"uCDsjgVSwE",
"tQ94WYr4EC",
"k7dY8pLoPd",
"hf8qwF2zeA",
"d3TAGcxGAk",
"ZSER8rY4IQ",
"Y4iuUhMfvv",
"U76BLn9thL",
"RRmUjG3gIe",
"QREonxAVOD",
"IbuSqP35fX",
"GvAwFtv4jy",
"G80p29JL23",
"G7YuYOK4gh",
"DXSbDlHKRH",
"DAqCtvkRAU",
"605yvNt4hJ",
"0QVPVCI1Zt"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1732542638320,
1729721965702,
1730720321195,
1737523537299,
1733215041561,
1732559499699,
1730644416065,
1732542320652,
1732542275066,
1733200598250,
1734487324032,
1732542446064,
1732542595818,
1733135072333,
1733201706280,
1733199730346,
1732542053904,
1732542506794,
1730696005681,
1730697220907
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_j6eW"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_gHkC"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_j6eW"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_a8AG"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_TRqz"
],
[
"ICLR.cc/2025/Conference/Submission2872/Area_Chair_dQNk"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_gHkC"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_TRqz"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_TRqz"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_CdCg"
],
[
"ICLR.cc/2025/Conference/Submission2872/Reviewer_TRqz"
]
],
"structured_content_str": [
"{\"title\": \"Answer for Reviewer j6eW\", \"comment\": \"Thanks for the suggestions.\\n\\n**Limited Benchmarks and Baselines:** As suggested, we extended our performance comparison by including a number of additional SoTA methods. As tabulated in Tables 1-4 in the supplementary materials for the extra experiments comments, the proposed AMViT outperforms all the SoTA methods, which further demonstrates the effectiveness of AMViT. In particular, we included another benchmark, Diving48, which is a comprehensive dataset specifically designed for fine-grained action recognition in diving videos. This dataset provides a diverse range of diving actions and is widely used to evaluate the performance of models in capturing intricate motion details.\\n\\n**Average Input Duration:** The benchmarks we used are long videos, where the average input duration is comparable to or longer than one of the suggested benchmark LongVideoBench.\\n| Dataset | Average Duration (s) |\\n| ------------- |:-----------------:|\\n| LONGVIDEOBENCH | 473 |\\n| AVA | 900 |\\n| Epic-kitchen-100 | 6120 |\\n| Diving48 | 378 |\\n\\nWe will elaborate this point in the experiment section as suggested.\", \"writing_and_figures\": \"We agree the presentation clarity of the paper should be further enhanced.\"}",
"{\"summary\": \"This paper introduces an Adaptive Memory Mechanism (AMM) to improve Vision Transformers (ViT) for long-form video understanding. AMM dynamically adjusts the Temporal Receptive Field (TRF) based on video content, overcoming limitations of fixed TRF approaches that either lose key information or increase computational costs. Experiments show that AMViT, integrating AMM, outperforms existing models like MeMViT in tasks such as action recognition, anticipation, and detection, while reducing computational overhead, validated on datasets like AVA and Epic-Kitchens.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Long-form video understanding is an important task, and efficiency is indeed a crucial metric in this context.\\n\\n2. The proposed method can reduce both training and inference costs.\\n\\n3. Introducing a memory bank to handle long sequence inputs is intuitive and reasonable.\", \"weaknesses\": \"1. (important) The number of benchmarks (only 2) and baselines (also only 2) compared seems somewhat limited. Adding more experiments would make the paper more convincing.\\n\\n2. (important) Although the authors emphasize that the new architecture is designed for long-form video, this aspect is not discussed in the experimental section. Are the benchmarks presented in the paper truly for long videos, and what is the average input length? It would have been better if the authors had conducted more detailed evaluations on benchmarks like MovieChat-1K [1] or LongVideoBench [2].\\n\\n3. The writing and figures in the paper need improvement, especially regarding the notation for memory. There are too many subscripts and superscripts, along with the extensive use of qkv notations, which made it take me three times longer to understand the entire paper. \\n\\n[1] Song, Enxin, et al. \\\"Moviechat: From dense token to sparse memory for long video understanding.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[2] Wu, Haoning, et al. \\\"Longvideobench: A benchmark for long-context interleaved video-language understanding.\\\" arXiv preprint arXiv:2407.15754 (2024).\", \"questions\": \"Please revise the Weaknesses section point by point. This is a paper with great potential. If the authors can provide additional responses to certain issues, discuss related work more thoroughly, and include more experiments and observations, I would be very happy to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper addresses a solution for better long-form video understanding using a method named Adaptive Memory Mechanism (AMM). This method enables the Vision Transformer (ViT) to adjust its temporal receptive field dynamically depending on the input video. A memory bank is utilized to save the most important Key-Value when temporally processing the videos. The proposed method is tested on AVA and Epic-Kitchens datasets for action detection, recognition, and anticipation tasks. Experiment results show performance improvement to the ViT baselines without additional cost.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The method have better performance than baselines without additional cost.\", \"weaknesses\": \"1. The paper lacks SoTA comparisons. Is the task different from common action recognition and action detection? Multiple methods such as VideoMAE, Omnivore, or MMT have been tested on these datasets. It would be helpful if the authors could explain the difference between previous SoTAs with the proposed method, for example in parameter count or GFLOP difference.\\n2. The improvement to ViT and MeMVit baselines is marginal.\\n3. There is no difference in the FLOPs and Param(M) numbers compared to the baselines. Can the authors explain further the efficiency advantage achieved by the proposed method?\", \"questions\": \"1. Will there be a significant performance difference if the model is not pre-trained with UMT?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Sorry for the missing annotation, we have updated it.\"}",
"{\"comment\": \"Thanks for your response. I've take a look at other reviewer's comments and also MemViT paper again. It seems to me AMViT lack a part of efficiency analysis, which is the important part MemViT wanna present (for example fig1 and 3 in MemViT paper). And the paper presentation also much stronger than proposed AMViT. Therefore, I decide to keep my rating 5 now.\"}",
"{\"summary\": \"The paper proposes an Adaptive Memory Mechanism (AMM) for Vision Transformer (ViT) in long-form video understanding. It addresses the issue of selecting an optimal Temporal Receptive Field by allowing ViT to adjust TRF dynamically. Instead of directly discarding early Key-Value cache, AMM uses a Memory Bank to retain important embeddings from the Key-Value cache based on attention scores. Experiments on AVA and Epic-Kitchens show the advantages of AMM in action recognition, anticipation, and detection tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.Long-form video understanding is an important research task, and the author has provided a reasonable solution.\\n\\n2.The paper is well-written, making it easy to read.\", \"weaknesses\": \"1.The novelty of memory bank is limited. Many studies have explored how to utilize memory to retain important historical information and how to dynamically update memory. For example, Xmem[1] prioritizes retaining the most frequently used candidates. MA-LLM[2] and MovieChat[3] merge the two most similar candidates based on similarity once the memory bank capacity is exceeded. The innovations and advantages of the memory bank proposed in this paper compared to these methods are unclear.\\n\\n2.The fairness of the experiment is in question. When comparing with the baseline model MeMViT, the authors replaced the backbone of MeMViT from MViT to UMT. This seems to have led to a decline in the performance of the baseline model. For example, in the EPIC-KITCHEN-100 action recognition task, the performance reported in the original paper on MeMViT was 48.4%, while the performance presented in this paper is 43.03%. The authors should maintain the same settings as MeMViT for the experiments to make the results more credible.\\n\\n3.The performance improvement is limited. Compared to the baseline model MeMViT, the performance improvement is less than 1% in all experiments.\\n\\n4.Lacks of comparison with the latest methods. This article only presents comparisons with ViT and MeMViT. Some recent methods are missing, such as MAT[4] and MC-ViT[5].\\n\\n5.Lacks of necessary ablation studies. (2) This paper uses an input-aware selective module to prevent redundant embeddings from being retained, and uses a memory bank to retain useful embeddings. However, there are no ablation experiments to demonstrate the effectiveness of these two components individually. (2) The lack of ablation experiments on the memory bank update method. For example, comparing the update of the memory bank using attention score of class tokens proposed in this paper with previous methods (see weakness 1) and First-In-First-Out (FIFO).\\n\\n[1] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model, ECCV 2022\\n[2] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding, CVPR 2024\\n[3] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding, CVPR2024\\n[4] Memory-and-Anticipation Transformer for Online Action Understanding, ICCV 2023\\n[5] Memory Consolidation Enables Long-Context Video Understanding, arxiv 2024\", \"questions\": \"When comparing with MeMViT, your model uses the memory bank and the selected Q-V cache, while MeMViT only uses Q-V cache. Have you ensured that the number of embeddings in both model is consistent? 
Specifically, does the size of the memory bank plus the size of the selected Q-V cache match the size of the unselected Q-V cache?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Module Ablation\", \"comment\": \"Table 5 Ablation study based on the Epic-Kitchen dataset and the evaluation task of action recognition. In the experiment, a 12-layer ViT-B is adopted for all the methods. Settings for each transformer are reported. Note that the number of compressed KV pairs obtained via CNN per video segment is 197. The accuracy reached the maximum when the selection size is 60 where the total number of augmented compressed KV pairs is 170 as compared to 394 for MeMViT.\\n| Model | Memory length (m) | Total # of compressed <br> KV pairs (m*197) | Memory bank size (L) | Selection size (k) |Total # of selected <br> KV pairs (m*k)|Total # of augmented <br> KV pairs (L+m*k) | Acc. |\\n|---------|------------|--------------|-------------|--------------|-------------|--------------------|--------|\\n| ViT-B | 0 | N/A | N/A | N/A | N/A |N/A | 37.12 |\\n| MeMViT-B | 1 | 197 | N/A | N/A | 197 |197 | 37.96 |\\n| MeMViT-B | 2 | 394 | N/A | N/A | 394 |394 | 37.87 |\\n| AMViT-B (no Memory Bank) | 2 | 394 | 0 | 50 | 100 |100 | 38.12 |\\n| AMViT-B | 1 | 197 | 50 | 50 | 50 |100 | 38.18 |\\n| AMViT-B | 2 | 394 | 50 | 30 | 60 |110 | 38.43 |\\n| AMViT-B | 2 | 394 | 50 | 50 | 100 |150 | 38.45 |\\n| AMViT-B | 2 | 394 | 50 | 60 | 120 |170 | 38.60 |\\n| AMViT-B | 2 | 394 | 50 | 70 | 140 |190 | 38.51 |\"}",
"{\"title\": \"SoTA Comparison\", \"comment\": \"As suggested by all the reviewers, we have added back the SOTA methods suggested by the reviewers and those we further identified to provide a more comprehensive performance comparison. Most of them are the results reported in their original papers, and some are reproduced by us for fair comparison. (* represent the results that are reproduced by ourselves.)\", \"table_1\": \"Performance Comparison. (Dataset: Epic-kitchen-100; Evaluation Task: Action Anticipation)\\n| Model | Pre-train Dataset |Top-1 Acc (%) |\\n| ------------- |:-----------------:|:-------------:|\\n| RU-LSTM [1] | IN-1K |13.3 |\\n| AVT [2] | IN-1K |13.6 |\\n| DCR [3] | IN-1K |14.6 |\\n| TeSTra[4] | IN-1K |17.0 |\\n| MAT [5] | IN-1K |18.8 |\\n| ViT-B* | K710 |19.3 |\\n| MeMViT-B* | K710 |19.5 |\\n| AMViT-B(Our) | K710 |19.8 |\\n| AMViT-L(Our) | K710 |22.6 |\", \"table_2\": \"Performance Comparison. (Dataset: Epic-kitchen-100; Evaluation Task: Action Recognition)\\n| Model | Pre-train Dataset |Top-1 Acc (%) |\\n| ------------- |:-----------------:|:-------------:|\\n| TSN [6] | IN-1K |20.54 |\\n| LSTA [7] | IN-1K |30.33 |\\n| VNMCE [8] | IN-1K |29.00 |\\n| RU-LSTM [1] | IN-1K |33.06 |\\n| ViT-B* | K710 |37.12 |\\n| MeMViT-B* | K710 |37.87 |\\n| AMViT-B(Our) | K710 |38.60 |\\n| AMViT-L(Our) | K710 |43.15 |\", \"table_3\": \"Performance Comparison. (Dataset: AVA; Evaluation Task: Action Detection)\\n| Model | Pre-train Dataset |Param(M)|mAP (%) |\\n| ------------- |:-----------------:|:------:|:------------:|\\n| SlowFast[9] | K600 |59 |27.5 |\\n| X3D-XL [10] | K600 |11 |27.4 |\\n| MViT [11] | K700 |51 |31.8 |\\n| ViT-B* | K710 |86 |28.59 |\\n| MeMViT-B* | K710 |86 |29.17 |\\n| AMViT-B(Our) | K710 |86 |30.07 |\\n| AMViT-L(Our) | K710 | 307 |35.98 |\", \"table_4\": \"Performance Comparison. (Dataset: Diving48 [14]; Evaluation Task: Action Recognition)\\n| Model |Param(M)|Top-1 Acc (%)) |\\n| ------------- |:------:|:------------:|\\n|TimeSformer [12]|121 |74.9 |\\n| MC-ViT [11]* |86 |74.5 |\\n| ViT-B* |86 |75.1 |\\n| MeMViT-B* |86 |76.2 |\\n| AMViT-B(Our) |86 |77.6 |\\n\\n**Reference:**\\n[1] What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. [ICCV\\u20192019]\\n[2] Anticipative Video Transformer. [ICCV\\u20192021]\\n[3] Learning to Anticipate Future with Dynamic Context Removal. [CVPR\\u20192022]\\n[4] Real-time Online Video Detection with Temporal Smoothing Transformers. [ECCV\\u20192022]\\n[5] Memory-and-Anticipation Transformer for Online Action Understanding. [ICCV\\u20192023]\\n[6] Temporal Segment Networks for Action Recognition in Videos. [TPAMI\\u20192019]\\n[7] LSTA: Long Short-Term Attention for Egocentric Action Recognition. [CVPR\\u20192019]\\n[8] Leveraging Uncertainty to Rethink Loss Functions and Evaluation Measures for Egocentric Action Anticipation. [ECCV\\u20192018]\\n[9] SlowFast Networks for Video Recognition. [ICCV\\u20192019]\\n[10] X3D: Expanding Architectures for Efficient Video Recognition. [CVPR\\u20192019]\\n[11] Improved Multiscale Vision Transformers for Classification and Detection. [CVPR\\u20192022]\\n[12] Is Space-time Attention All You Need for Video Understanding? [ICML\\u20192021]\\n[13] Memory Consolidation Enables Long-Context Video Understanding. [ICML\\u20192024]\\n[14] RESOUND: Towards Action Recognition without Representation Bias. [ECCV\\u20192018]\"}",
"{\"title\": \"Complexity comparison\", \"comment\": \"For different variants here, it might be better to also include a complexity or run-time comparison.\"}",
"{\"metareview\": \"(a) Scientific Claims and Findings\\n\\nThe paper introduces an Adaptive Memory Mechanism (AMM) for Vision Transformers (ViT) aimed at improving long-form video understanding. AMM allows ViT to dynamically adjust its Temporal Receptive Field (TRF) based on video content, utilizing a memory bank to retain important Key-Value pairs. The method is evaluated on AVA and Epic-Kitchens datasets, demonstrating performance improvements over ViT baselines without additional computational cost.\\n\\n(b) Strengths\\n\\nReviewer gHkC highlights that the method improves performance over baselines without extra computational cost. TRqz and j6eW note that the approach is relevant to the important task of long-form video understanding and is efficient in reducing both training and inference costs. CdCg and a8AG appreciate the clarity and readability of the paper, with CdCg also noting the simplicity and intuitiveness of the method.\\n\\n(c) Weaknesses\\n\\nA major concern, as pointed out by reviewers gHkC, TRqz, and a8AG, is the lack of comparisons with state-of-the-art methods, which raises questions about the method's relative performance. TRqz and CdCg also mention that the performance improvements are marginal. The experiments are limited to two datasets, as noted by CdCg and j6eW, and the paper lacks ablation studies to validate the contributions of individual modules, as highlighted by TRqz and a8AG. Additionally, a8AG questions the novelty of the memory bank, and there are concerns about the fairness of the experimental setup, particularly in comparisons with MeMViT.\\n\\n(d) Decision Reasons \\n\\nThe AC aligns with the reject recommendation of all reviewers. The decision to reject the paper is primarily due to the lack of comprehensive comparisons with state-of-the-art methods and limited experimental validation, as noted by reviewers gHkC, TRqz, and j6eW. The marginal performance gains over existing methods, as mentioned by TRqz and CdCg, do not provide sufficient justification for acceptance. Furthermore, the paper's contributions are not deemed novel or significant enough compared to existing work, as pointed out by a8AG. Overall, while the paper addresses an important problem, it requires more robust experiments, comparisons, and analyses to strengthen its contributions.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors addressed several concerns raised by the reviewers, leading to some adjustments in their evaluations, though not enough to change the overall decision.\\n\\nReviewer gHkC appreciated the authors' efforts in clarifying the state-of-the-art comparisons and adding additional ablation studies. However, they felt that the paper still lacked a significant advantage over the closely related MeMViT. Despite these improvements, the reviewer decided to adjust their score to 5, indicating a marginal improvement in their perception of the paper.\\n\\nReviewer TRqz acknowledged that some of their concerns were addressed in the rebuttal, particularly after considering feedback from other reviewers. However, they maintained that the improvements over the MeMViT baseline were limited in both novelty and performance gains. Consequently, they adjusted their rating to a 5, but still considered the paper to fall below the acceptance threshold.\\n\\nReviewer j6eW noted that the rebuttal did not sufficiently address the lack of efficiency analysis, which was a key aspect of the MeMViT paper. 
They also found the presentation of the AMViT paper to be weaker compared to MeMViT. As a result, they decided to keep their rating at 5.\\n\\nIn weighing these points for the final decision, the primary considerations were the limited novelty and marginal performance improvements over existing methods, as well as the lack of comprehensive efficiency analysis. Despite the authors' efforts to address the reviewers' concerns, the paper did not convince them towards accepting the paper.\"}",
"{\"title\": \"Answer for Reviewer TRqz\", \"comment\": \"Thank you for the comments.\\n\\n**Memory Bank Size of MeMViT and AMViT:** The proposed AMViT introduces an Input-aware Selective Module (ISM) to retain the important compressed KV cache and an additional Memory Bank to store the cache that would otherwise be dropped in MeMViT. Therefore, there is no Memory Bank in MeMViT. Only the KV cache of the video segment immediately preceding the current segment is considered in MeMViT. For fair comparisons, we kept all modules (compressed KV cache, backbone, etc.) the same in our experiments for both MeMViT and AMViT, with the only differences being the addition of the ISM and the memory bank in AMViT.\\n\\n**Ablations in ISM and Memory Bank:** We added the suggested ablation study. According to Table 5 under the supplementary experiments comments, AMViT without ISM and Memory Bank noticeably would degenerate to MeMViT. Even though AMViT needs an extra Memory Bank to store the important compressed KV cache, the total number of augmented tokens is reduced because the ISM module further shrinks the compressed KV cache via the selection.\\n\\n**Effective Temporal Receptive Field:** Please check Figure 4 in the paper, it shows that the memory bank keeps more tokens near recent iterations for the \\\"Take Pot\\\" action than for the \\\"Wash Pot\\\" action. This suggests the Memory Bank focuses more on recent iterations for \\\"Take Pot.\\\" This aligns with Figure 1, which indicates the pot appears for a shorter duration during \\\"Take Pot\\\" than \\\"Wash Pot.\\\" The different retained tokens based on different actions indicate that AMViT would have a larger temporal receptive field than MeMViT when it is needed for particular actions.\\n\\n**Parameter Setting and MeMViT Performance based on Original Backbone:** As explained, there is no Memory Bank in MeMViT. The sign \\u201c*\\u201d in Table 6 is to indicate that. We will add a footnote accordingly. Also, as suggested, we tried to reproduce the results of MeMViT using the backbone adopted in the original paper.\\n\\n| | AVA (action detection) mAP(%) | Epic-kitchen (action recognition) top-1 acc(%) | Epic-kitchen (action anticipation) top-1 acc(%) |\\n|------------|-------------|------------|------------------|\\n| Results in original paper | 29.3 | 46.2| 15.1 |\\n| Our reproduced results| 29.17| 38.45 | 19.5|\\n\\nFor the AVA dataset, our reproduced results are consistent, while there is a significant discrepancy in the Epic-kitchen dataset. As MeMViT does not provide the code for training the model on Epic-kitchen, we utilize the code from RULSTM [1], which may be the reason for the disparity. Despite this, our reproduced results remain SoTA according to their evaluation methodology. Details can be found in Tables 1 and 2 under the supplementary experiments comment.\\n\\n**FLOPs and # Parameters:** (i) We apologize for the typos in Table 1 of the submitted manuscript, which should be the same as in Tables 2 and 3. (ii) With reference to the ViT backbone, the additional FLOPs and the number of parameters due to the memory-augmented methods are just incremental and insignificant, which corresponds to the additional CNN for compressing the KV cache. 
AMViT does not intend to reduce the model size but tries to make more effective use of the augmented memory to achieve dynamic temporal receptive fields, and thus better performance.\\n\\n**State-of-the-Art Comparisons:** As suggested, we extended our performance comparison by including a number of additional SoTA methods. As tabulated in Tables 1-4 of the additional experimental results we provided, the proposed AMViT outperforms all the SoTA methods, which further demonstrates the effectiveness of AMViT.\\n\\n**References:**\\n[1] What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention. [ICCV\\u20192019]\"}",
"{\"title\": \"Answer for Reviewer a8AG\", \"comment\": \"Thanks for the comments.\\n\\n**Comparing AMViT with Xmen [15], MC-ViT [13], MA-LLM [16], MovieChat [17] and MAT [5]:**\\nXMem [2] uses the memory mechanism at the image level for video object segmentation. Though the high-level idea is the same, the proposed AMViT proposes a novel memory mechanism at the image-patch level, and the application is for action recognition and detection, where one of the crucial steps is to identify the relevant salient features.\\n\\nMC-ViT [3] uses a memory consolidation mechanism to capture longer context. The proposed AMViT utilizes the proposed input-aware selection mechanism so that more relevant context over time can be more effectively captured For fair comparisons, we evaluated MC-ViT using the same backbone of AMViT and reported the result in Table 4. It is noted that both MemViT and AMViT can achieve better performance than MC-ViT.\\n\\nMA-LLM [4] & MovieChat [5] make use of memory compression based on the adjacent similarity to implement long-term memory. AMViT instead makes use of an input-aware selection mechanism to adaptively maintain the memory bank.\\n\\nMAT [6] uses a memory-anticipation-based paradigm to model the temporal structure of past, present, and future. AMViT uses the adaptive memory mechanism instead. We also carried out additional performance comparisons, as shown in Table 1. We find that AMViT can outperform MAT for the action anticipation task based on the Epic-kitchen-100 dataset.\\n\\n**About Reproduced Results for MeMViT:** MeMViT does not provide the code for training the model on Epic-kitchen. We utilize the code from RULSTM [1], which may be the reason for the disparity. For the AVA dataset, our reproduced results are consistent with those reported in the original paper. Despite this, our reproduced results remain SoTA according to their evaluation methodology. Detailed information can be found in Tables 1 and 2 of the supplementary experiments comment.\\n\\nIn addition, we also tried to reproduce the results of MeMViT by adopting the backbone adopted in the original paper.\\n| | AVA (action detection) mAP(%) | Epic-kitchen (action recognition) top-1 acc(%) | Epic-kitchen (action anticipation) top-1 acc(%) |\\n|---------|-----------|-------------------|---------------------------|\\n| Results in original paper | 29.3 | 46.2| 15.1 |\\n| Our reproduced results| 29.17| 38.45 | 19.5|\\n\\n**Marginal Improvement:** Although our method does not show a very significant improvement, it achieves better results requiring less augmented KV pairs. Furthermore, we tested our method on another benchmark, Diving48, where it achieved a 1.4% improvement over MeMViT. For more details, please refer to Table 4 in the supplementary experiments section.\\n\\n**Latest Methods Comparisons:** As suggested, we extended our performance comparison by including a number of additional SoTA methods. As tabulated in Tables 1-4 in the supplementary materials for the extra experiments comments, the proposed AMViT outperforms all the SoTA methods, which further demonstrates the effectiveness of AMViT.\\n\\n**Ablations in ISM and Memory Bank:** We added the suggested ablation study. According to Table 5 in Additional Experimental Results, we show that adding the ISM can enhance the performance. Adding also the Memory Bank can further enhance the performance. AMViT without ISM and Memory Bank degenerates back to MeMViT. 
Note that even though AMViT needs an extra Memory Bank to store the important compressed KV cache pairs, the total number of augmented tokens is reduced because the ISM module further shrinks the compressed KV cache pairs via the selection.\\n\\n**KV Cache Consistency:** In our experiments, we kept all modules (compressed KV cache, backbone, etc.) the same, with the only differences being the addition of the ISM and the memory bank in AMViT. Please see Table 5 in the additional experimental results.\\n\\n**References:**\\n[1] What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention. [ICCV\\u20192019]\\n[2] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model. [ECCV\\u20192022]\\n[3] Memory Consolidation Enables Long-Context Video Understanding. [ICML\\u20192024]\\n[4] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding. [CVPR\\u20192024]\\n[5] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding. [CVPR\\u20192024]\\n[6] Memory-and-Anticipation Transformer for Online Action Understanding. [ICCV\\u20192023]\"}",
"{\"comment\": \"Thank you for clarifying the SoTA comparison and the difference between the proposed method and other related works. I highly appreciate the effort of adding the SoTA and additional ablation studies performed by the authors to address the reviewers' concerns, however, I think the paper still lacks a significant advantage over the closely related MeM-ViT. Nevertheless, I decided to change my score to 5.\"}",
"{\"comment\": \"Thank the authors for their response. After reviewing the rebuttal and the feedback from other reviewers, I feel some of my concerns have been addressed. However, the improvements over the MeMViT baseline remain limited in both novelty and performance gains. Based on this, I am adjusting my rating to a 5, but I still think it falls below the acceptance threshold.\"}",
"{\"title\": \"Which rows are reproduced by authors?\", \"comment\": [\"Although the authors mentioned \\\"* represent the results that are reproduced by ourselves\\\", it seems no rows have this mark?\"]}",
"{\"title\": \"Answer for Reviewer gHkC\", \"comment\": \"Thanks for the suggestions.\\n\\n**State-of-the-Art Comparison:** As suggested, we extended our performance comparison by including a number of additional SoTA methods. As tabulated in Tables 1-4 of the additional experimental results we provided, the proposed AMViT outperforms all the SoTA methods, which further demonstrates the effectiveness of AMViT.\\n\\n**Action Recognition and Action Detection Tasks:** We test on the same Action Recognition and Action Detection tasks as in other related works like VideoMAE and Omnivore. Yet, the way how the proposed AMViT trains the ViT is different from methods like VideoMAE and Omnivore which process a limited number of frames at a time and thus focus on the action occurrence duration within the video. In contrast, our method processes the video stream continuously fed to the model, starting from the beginning, to perform action recognition or detection tasks. We will clarify this point in the paper.\\n\\n**AMViT vs. \\u201cVideoMAE, Omnivore, UMT\\u201d:** Methods like VideoMAE, Omnivore, and UMT are designed for understanding short-form videos, and they cannot scale if directly applied to long-from videos. The quadratic increase in complexity will be resulted as the length of the video segment increases due to the use of the Transformer architecture. Memory-augmented methods like MeMViT are introduced to address the issue by augmenting compressed versions of the embeddings of the preceding segments. The proposed AMViT further extends the idea to capture longer-range dependency with an adaptive memory augmentation mechanism. It is to be noted that the advantage gained by the proposed adaptive memory mechanism is orthogonal to how we train the ViT like VideoMAE, Omnivore, and UMT.\\n\\n**Marginal Improvement:** Although our method does not show a very significant improvement, it achieves better results while reducing the number of augmented KV pairs. Furthermore, we tested our method on another benchmark, Diving48, where it achieved a 1.4% improvement over MeMViT. For more details, please refer to Table 4 in the supplementary experiments section.\\n\\n\\n**FLOPs and # Parameters:** With reference to the ViT backbone, the additional FLOPs and the number of parameters due to the memory-augmented methods are often insignificant, mainly due to the additional CNN required to compress the KV cache. Yet AMViT can make more effective use of the augmented memory to achieve dynamic temporal receptive fields, and thus better performance. Note: There are typos in Table 1 of the submitted manuscript. The FLOPs and the number of parameters should be:\\nFLOPs(M): ViT-B(202.22), MeMViT(202.28), AMViT(202.28) &\\nParameters(M): ViT(85.64), MeMViT(85.66), AMViT(85.66).\\n\\n**Using Other Pre-trained Models:** While the choice of the pre-trained model shall affect the overall performance, the contribution of AMViT is how to get the most from a particular pre-trained model by achieving adaptive temporal receptive fields. Different backbones, due to how they are trained, should have different performance. In principle, applying the proposed adaptive memory mechanism can further improve their performance.\"}",
"{\"title\": \"Answer for Reviewer CdCg\", \"comment\": \"Thanks for the comments.\\n\\n**State-of-the-Art Comparisons:** As suggested, we extended our performance comparison by including a number of additional SoTA methods. As tabulated in Tables 1-4 in the supplementary materials for the extra experiments comments, the proposed AMViT outperforms all the SoTA methods, which further demonstrates the effectiveness of AMViT.\\n\\n**Marginal Improvement:** Although our method does not show a very significant improvement, it achieves better results requiring less augmented KV pairs. Furthermore, we tested our method on another benchmark, Diving48, where it achieved a 1.4% improvement over MeMViT. For more details, please refer to Table 4 in the supplementary experiments section.\\n\\n**Limitation of Adaptive Selection:** It is true that if the range of dependency is long, the SOTA memory-augmented methods, such as SWIM [1], MeMViT [2], and MC-ViT [3], as well as the proposed AMViT may have the irrelevant content discarded. While AMViT cannot eliminate the possibility, it tries to alleviate this limitation by maintaining an adaptive temporal receptive field. The situations mentioned by the reviewer could indeed happen, and more robust memory mechanisms which can effectively handle that are worth further investigation.\\n\\n**Gradient:** The KV Cache does not retain the gradient.\\n\\n**Vision-language Extension:** Our methods, being memory-augmented, can potentially be integrated into language models. MC-ViT [3] has already been explored to demonstrate the possibility of integration of augmented methods with language models, indicating the feasibility of such an extension. This will be our future work.\\n\\n**References:**\\n[1] Video Swin Transformer. [CVPR\\u20192022]\\n[2] MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition. [CVPR\\u20192022]\\n[3] Memory Consolidation Enables Long-Context Video Understanding. [ICML\\u20192024]\"}",
"{\"summary\": \"This paper aims to enhance ViT for long-term video understanding. The authors design a memory bank to store historical information and develop input-aware adaptive memory selection to retrieve the relevant information to assist long-term analysis. The experiments show that the architecture demonstrates satisfactory performance with high efficiency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The analysis of the limited temporal receptive field in long-term video understanding makes sense, and the motivation is clear.\\n2. The method is simple and intuitive.\", \"weaknesses\": \"1. The experiments are limited. Only AVA and Epic-Kitchens are reported. Results on more video datasets are required to verify the effectiveness of the adaptive memory design. Besides, the performance improvements are marginal.\\n2. The memory bank is recurrently updated by adaptive selection. Is it possible that in a long video, the content in the middle of the video is not closely related to the beginning, and only relevant content appears towards the end? However, during the memory bank update process, the tokens of the earlier video content were already discarded.\", \"questions\": \"1. Does the KV Cache in this paper retain the gradient?\\n2. This paper focuses on a pure vision model with enhanced memory design. However, the ViT-only architecture is capable of a limited range of video-related tasks. Is it possible to integrate it with video-language models to achieve wider range of video tasks to exert more impact on the community?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents an adaptive memory method to improve the existing memory-augmented methods for long-form video understanding. The method is based on MeMViT but makes the memory bank adaptive to support the adaptive temporal receptive field. The experiments are conducted on Ava and Epic-Kitchens dataset with the comparison with ViT and MeMViT.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Long-form video understanding is an important video research topic and the idea of using an adaptive memory bank sounds reasonable and promising.\", \"Compared to MeMViT, the results show consistent improvements though some datasets only have marginal gain.\"], \"weaknesses\": [\"One of the main motivations of the paper is to retain embeddings instead of discarding memory when the memory limit is reached. However, based on the experiments, it's unclear if the effective receptive field of AMViT is indeed larger than MeMViT through the proposed adaptive memory module. Are they still using the same memory bank size?\", \"In the model section, the paper presents two new modules, including Input-aware selective module (ISM) and Adaptive Memory mechanism(AMM). However, there are no ablations to validate the individual effectiveness of these modules.\", \"How do we select parameters for MeMViT? Some parameters for MeMViT (Table 6) are not defined, e.g, memory bank size. Is it the same as AMViT? Given the authors are reproducing MeMViT with a different backbone, how the results compare to the original paper.\", \"In Table 1, it's unclear why all the three methods are having the same FLOPs and parameters given MeMviT and AMViT has additional memory bank modules. It's also better to conduct run-time comparison.\", \"The experiments are also missing a system-level comparison with the current SOTA results on the benchmarks.\"], \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1DEEVAl5QX | Mini-batch Submodular Maximization | [
"Gregory Schwartzman"
] | We present the first *mini-batch* algorithm for maximizing a non-negative monotone *decomposable* submodular function, $F=\sum_{i=1}^N f^i$, under a set of constraints.
We consider two sampling approaches: uniform and weighted. We show that mini-batch with weighted sampling improves over the state-of-the-art sparsifier-based approach both in theory and in practice. Surprisingly, we experimentally observe that uniform sampling achieves superior results to weighted sampling. However, it is *impossible* to explain this using worst-case analysis. Our main contribution is using *smoothed analysis* to provide a theoretical foundation for our experimental results. We show that, under *very mild* assumptions, uniform sampling is superior for both the mini-batch and the sparsifier approaches. We empirically verify that these assumptions hold for our datasets. Uniform sampling is simple to implement and has complexity independent of $N$, making it the perfect candidate to tackle massive real-world datasets. | [
"smoothed analysis",
"submodular maximization"
] | Reject | https://openreview.net/pdf?id=1DEEVAl5QX | https://openreview.net/forum?id=1DEEVAl5QX | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yXRGfxLABG",
"vKoGugTFZM",
"pOyNi0J9yi",
"noWcsL4SNH",
"mmWprlX0ok",
"lTmRsUYIsQ",
"fJFGNjCiLb",
"eFnx1vUblJ",
"Tx4AvWtIcV",
"PHa4AnjKX8",
"GCSKQMyGPa",
"DucNkKEipE",
"32kO6vlwu2"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"decision"
],
"note_created": [
1731471142520,
1732333784481,
1730764051780,
1733278032638,
1731470714017,
1732334440970,
1731642499761,
1730659389058,
1730685393445,
1734746509068,
1731473182195,
1731940707313,
1737523685999
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Reviewer_ARiB"
],
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Reviewer_3AMz"
],
[
"ICLR.cc/2025/Conference/Submission5129/Reviewer_87eG"
],
[
"ICLR.cc/2025/Conference/Submission5129/Area_Chair_i4Qy"
],
[
"ICLR.cc/2025/Conference/Submission5129/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5129/Reviewer_3AMz"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for your review.\\n\\nAbout the figure, basically all algorithms achieve almost the same quality when we sample enough elements.\\nWe will move Section 4 before Section 3 if accepted.\"}",
"{\"comment\": \"Dear reviewer, the discussion period is ending soon, and we would really appreciate your response.\"}",
"{\"summary\": \"This paper considers maximization of decomposable monotone submodular functions over a ground set of size $n$, meaning that the objective function $f$ is a sum of $N$ monotone submodular functions $f_1,...,f_N$. If $N$ is large, then evaluations of $f$ may be computationally demanding. Previous work on the topic (Rafiey & Yoshida, 2022; Kenneth & Krauthgamer 2023) proposes constructing a random sparsified version of $f$ that is a weighted sum of some subset of the functions, and is within a multiplicative $\\\\epsilon$ factor approximation on all sets. A sparsifier such as those mentioned could be constructed as a preprocessing step for an algorithm, and then the algorithm would be run using the sparsifier in place of the original function. The state of the art is that of Kenneth & Krauthgamer, where a sparsifier of $O(k^2n\\\\epsilon^{-2})$ functions is constructed using $O(Nn)$ oracle calls. The sparsifier is constructed by iterating over the functions, computing a probability $p_i$ for each function $f_i$ to be included, and then sampling that function with probability $p_i$ (which takes a total of $O(Nn)$ queries). Then querying the sparsifier takes $O(k^2n\\\\epsilon^{-2})$ function evaluations, compared to $O(N)$ function evaluations to query the original $f$. If $N$ is relatively large, the sparsifier is more efficient.\\n\\nInstead of computing a sparsifier as a preprocessing step for an algorithm, this paper proposes a \\\"mini-batch method\\\" (which have been used in other areas of ML) for this problem (Algorithm 3). That is, a new sparsifier is sampled every iteration of the greedy algorithm. The approach in this paper uses the same sampling probabilities $p_i$ as Kenneth & Krauthgamer, and therefore still needs the $O(Nn)$ queries as a preprocessing step to compute the $p_i$. In order to prove some of the results in their paper, they make additional assumptions on the problem setting (Models 1 and 2). Several analyses are done on the number of function queries needed for their algorithm. Finally, they include an experimental comparison of their algorithm and related works.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Exploring submodular optimization algorithms that do not view the function $f$ as simply a black box is an interesting research direction that I think deserves attention.\", \"They explained their results clearly and the paper was easy to understand.\"], \"weaknesses\": [\"It seems a lot of the difficulty of these sparsification approaches is because the sampling of the $f_i$ is non-uniform, but it is still unclear to me that this is so much better than uniform sampling. According to this paper, uniform sampling does better in practice, and requires no preprocessing to compute the $p_i$ since they would be uniform. It is also stated that no theoretical bound can be gotten for uniform sampling. 
But if we assume that all the $f_i$ are bounded by some value $R$, why can't concentration inequalities be used to get a theoretical guarantee for the uniform approach?\", \"Some of the results are dependent on assuming Models 1 or 2 (see Table 1), but it isn't clear to me that these models are realistic for applications of the problem.\", \"Improvements over Kenneth and Krauthgamer mainly include the curvature of the function in the bound on the number of function queries, so the bounds are instance dependent.\", \"The bounded curvature results (which don't depend on Models 1 and 2) don't use ideas that are that novel compared to related work. It seems the biggest difference from Kenneth and Krauthgamer is computing the sparsifier at each round of the greedy algorithm, and only relatively minor changes are needed to the argument of Kenneth and Krauthgamer.\"], \"questions\": [\"If the $f_i$ are all bounded by a value $R$, could theoretical guarantees be gotten for uniform sampling?\", \"Do you expect Models 1 and 2 would hold widely in applications of decomposable functions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Summary\", \"comment\": \"Unfortunately, there was very limited interaction during the discussion period. Due to the public nature of the discussion, I'm adding this summary to clarify key points and avoid misunderstandings.\\n\\nThe reviews mostly lacked constructive feedback. The main critique focused on the simplicity of our analysis, which is actually a strength, and overlooked the core contribution: our uniform sampling mini-batch algorithm. This approach outperforms over other approaches both theoretically and practically. It is easy to implement and can be used for massive datasets.\\n\\nWhen considering potential impact, a good reference is Sculley's mini-batch k-means algorithm (https://dl.acm.org/doi/10.1145/1772690.1772862), which is extremely simple (just 2 pages) yet widely used in practice (e.g., sklearn).\"}",
"{\"title\": \"Our main contribution is the uniform alg + smoothed analysis.\", \"comment\": \"Thank you for your review.\\n\\nWe would like to emphasize that our main contribution is the introduction of the *uniform* sampling algorithm, observing that it outperforms other approaches empirically, and using smoothed analysis to bridge the gap between theory (no worst case analysis possible) and practice. We believe that this algorithm can be used as the first line of attack for many real-world massive datasets. The improved weighted sampling is \\\"nice to have\\\" and lays the groundwork\\u00a0for the smoothed analysis of the uniform sampling algorithm, but this is not our main contribution.\\n\\nWe address your questions below.\\u00a0\\n\\nQ1) An upper bound is not sufficient. This is because our proofs require a *multiplicative* error bound for the minibatch approach to work. Consider the following example: all functions except one are always zero ($f^i \\\\equiv 0, i\\\\neq j$), and one function, $f^j$, is upper bounded by 1. Clearly both the minibatch and the sparsifier algorithms can't optimize the sum as they will keep sampling functions that are always 0. The above example is unlikely to appear in real world applications, but it illustrates that worst-case analysis is simply not the right tool here. This is why we use smoothed analysis to explain the superior performance of uniform sampling in practice.\\n\\nQ2) Yes, specifically Model 2. The assumptions of Model 2 are *extremely* mild and we verify\\u00a0empirically that they hold for *all* of our datasets. We would like to emphasize that we only introduced smoothed analysis in this revision of the paper, while we used the same datasets in previous revisions. That is, we did not simply pick datasets where our models apply (and indeed Model 1 does not apply to all datasets).\"}",
"{\"comment\": \"Thank you for your time. In order to improve the paper for future submissions, may we ask what improvement in your opinion would make the paper cross the acceptance threshold?\"}",
"{\"comment\": \"Another point we would like to address:\\n\\\"It is unclear if ICLR is an appropriate venue for this work. The non-exhaustive list of topics in the Call for Papers includes \\\"optimization\\\", but submodular maximization in its raw form seems one hop away from the target areas of ICLR (deep learning)\\\"\\n\\nSubmodular optimization papers are often published in ICLR / Neurips / ICML. While it is true that usually there are only a few submissions dealing with the raw form of submodular optimization, they are accepted quite positively. See for example this submission for ICLR 2025 which received some very positive reviews - https://openreview.net/forum?id=EPHsIa0Ytg\"}",
"{\"summary\": \"This work studies a sampling-based algorithm for faster non-negative monotone *decomposable* submodular maximization subject to\\ncardinality or $p$-system constraints. In particular, it builds on work of\\n[Kenneth-Krauthgamer, ICALP 2024] (please update reference in paper), which sparsifies\\nand reweights the set of functions $f^{(i)}(S)$ for the input function $F(S) = \\\\sum_{i=1}^N f^{(i)}(S)$.\\nThe goal of this paper is to eliminate the dependence on $N$, which the authors do under mild assumptions\\nvia *smoothed analysis*. They also show that this is not possible in the general case with a simple pathological example.\\nIn short, the main idea is to sample a subset of $f^{(i)}(S)$ functions at each step to form a\\n\\\"mini-batch\\\" for approximating the full $F(S)$. The algorithm then greedily\\nselect the next element based on the sampled funciton (which changes in each iteration), not $F$ itself.\\n\\nFurther, under the mild realistic assumptions, they prove why uniform sampling is a competitive approach,\\nwhich helps explain initially surprising experimental observations.\\nLastly, this work provides a clean set of experiments comparing their mini-batch sampling-based methods to\\na full lazy greedy algorithm and the sparsification idea in [Kenneth-Krauthgamer, ICALP 2024].\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Uses smoothed analysis to more accurately study realistic inputs\", \"Table 1 cleanyl describes the results, including a comparison with [Kenneth-Krauthgamer, ICALP 2024]\", \"Draws connections to the lazier-than-lazy greedy algorithm of [Mirzasoleiman et al., AAAI 2015]\", \"and explains how the two ideas can be combined to reduce query complexity by a factor of $\\\\Theta(k)$\", \"Good comprehensive set of experiments for cardinality constraints, though the\", \"values of $k \\\\le 20$ are quite small. It would be nicer to increase $k$ to see\", \"how fast the different algorithms converge (relatively) to lazy greedy\"], \"weaknesses\": \"- The lunch menu optimization example, while a clear illustration, does not\\n really motivate the problem from a practioner's perspective\\n- There are no $p$-system experiments\\n- It is unclear if ICLR is an appropriate venue for this work. The\\n non-exhaustive list of topics in the Call for Papers includes \\\"optimization\\\",\\n but submodular maximization in its raw form seems one hop away from the\\n target areas of ICLR (deep learning)\", \"questions\": \"- In the introduction, you claim that \\\"in many of the above applications, $N$\\n (the number of underlying submodular functions) is extremely large, making the\\n evaluation of $F$ prohibitively slow.\\\" Are there realistic examples where $N\\n \\\\gg 1000$? It's not clear to me how often we really encounter $N$ *distinct*\\n personalized submodular functions.\\n- What exactly is the quantity $A_e$ when you first introduce it on page 3?\\n This should be made more clear. Initially, I thought it was a vector of all\\n marginal values, but then in model 1 you say it's a random variable.\\n- For the Uber pickups experiment, why do you use Llyod's algorithm to find\\n centers instead of a data-indepedndent grid?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the problem of maximizing a non-negative, monotone, decomposable submodular function under the cardinality constraint and $p$-system constraint. It introduces the first mini-batch algorithm with weighted sampling for this problem, demonstrating that it outperforms the sparsifier-based approach both theoretically and empirically. Additionally, the authors observe that, in experiments, uniform sampling outperforms weighted sampling. To explain this outcome, they define two smoothing models. The first model provides theoretical guarantees for both the mini-batch and sparsifier algorithms on some datasets, while the second model applies only to the mini-batch algorithm but is effective across all datasets tested.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"Overall, the paper is well-structured and easy to understand. The definitions and explanations are clear, and related work is discussed in sufficient detail.\\n\\nThe discussion on uniform and weighted sampling, along with the smoothing model, helps bridge the gap between theoretical results and the empirical performance of the algorithms. It provides insights into why an algorithm without a worst-case guarantee can still perform well in experiments.\", \"weaknesses\": \"The algorithm is simple, and the analysis is quite straightforward. The technical contribution is limited.\\n\\nWith 12 indistinguishable lines in Figure 1, it is hard to see which algorithm with $\\\\beta=10^{-2}$ achieves the best performance.\", \"questions\": \"It might be better to put Section 4 before Section 3 to ensure the continuity of the analysis.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper studies the problem of maximizing a decomposable submodular function subject to constraints. The main contribution of the paper is a sampling-based algorithm that uniformly samples and reweighs a subset of the functions in order to create a much smaller sparsified instance that can be solved more efficiently. Under certain assumptions , the paper uses smoothed analysis to show that the uniform sampling approach improves upon existing sparsifier approaches.\\n\\nOne of the main strengths of this work is that it provides a theoretical justification for the uniform sampling approach which is a preferred approach in practice. Although this work makes a valuable contribution that is relevant to applications, there was consensus among the reviewers that the contribution is limited and it does not meet the threshold for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, the authors clarified several potential misunderstandings regarding the algorithm and its analysis. The authors also addressed the reviewers' questions. The main concern raised was that the contribution is limited, which is the main factor in the decision.\"}",
"{\"comment\": \"Thank you for the review. We address your comments below.\", \"weaknesses\": \"- \\\"The lunch menu optimization example, while a clear illustration, does not really motivate the problem from a practioner's perspective\\\"\\n\\nAgreed, we chose this for simplicity. However, utility maximization is an extremely\\u00a0natural problem. Other examples for utility maximization include: adding medical services to a healthcare package as to maximize the welfare of all patients, adding features to a website to maximize user engagement, etc...\\n\\n- \\\"There are no p-system experiments\\\"\\n\\nIndeed, we couldn't find a real-world dataset for this problem. Previous papers seems to either be completely theoretical (no experiments), or run experiments just under a cardinality constraint.\", \"questions\": \"Q1) It is quite natural in welfare maximizations (e.g., many people with different preferences). Another example is finding a representative set of images (e.g., thumbnails for a video). Here N can be very large (the number of frames in the video), clearly there is plenty of redundancy, so our approach is very natural here.\\n\\nQ2) It is defined just above Model 1. It is the set $\\\\{f^i(e)\\\\}_{i\\\\in [N]}$ and in our models we assume that every $f^i(e)$ (the value of the i-th func on e) is a random variable, not the set $A_e$.\\n\\nQ3) We roughly followed the paper of Rafiey and Yoshida which introduced\\u00a0this set. They simply say that they select a set of \\\"popular pickup locations in the dataset\\\". We used k-means to pick \\\"popular locations\\\".\"}",
"{\"comment\": \"Thank you for your response. I have read through all the reviews and rebuttals, and will maintain my score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}"
]
} |
1D3TjFidCS | Logarithmic Linear Units (LogLUs): A Novel Activation Function for Improved Convergence in Deep Neural Networks | [
"Rishi Chaitanya Sri Prasad Nalluri",
"Prabakaran Ganeshan",
"Karthik Rajendran"
] | The Logarithmic Linear Unit (LogLU) presents a novel activation function for deep neural networks by incorporating logarithmic elements into its design, introducing non-linearity that significantly enhances both training efficiency and accuracy. LogLU effectively addresses common limitations associated with widely used activation functions, including ReLU, Leaky ReLU, and ELU, which suffer from issues like the dead neuron problem and vanishing gradients. By enabling neurons to remain active with negative inputs and ensuring effective gradient flow during backpropagation, LogLU promotes more efficient convergence in gradient descent. Its capability to solve fundamental yet complex non-linear tasks, such as the XOR problem, with fewer neurons demonstrates its efficiency in capturing non-linear patterns. Extensive evaluations on benchmark datasets like Caltech 101 and Imagenette, using the InceptionV3 architecture, reveal that LogLU not only accelerates convergence but also enhances model performance compared to existing activation functions. These findings underscore LogLU's potential as an effective activation function that improves both model performance and convergence speed. | [
"Activation Function",
"Deep Neural Networks",
"Optimisation"
] | Reject | https://openreview.net/pdf?id=1D3TjFidCS | https://openreview.net/forum?id=1D3TjFidCS | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vKcmAIq1Zf",
"r2bTKSQUVY",
"quPrJHHx2S",
"Ix0bL8awJn",
"BIhlYv1Y4R",
"6iob23wS4P"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"meta_review",
"decision",
"official_review"
],
"note_created": [
1730033659107,
1730928364432,
1730104400819,
1734541581496,
1737524269701,
1730704218375
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13583/Reviewer_iz8k"
],
[
"ICLR.cc/2025/Conference/Submission13583/Reviewer_oiB4"
],
[
"ICLR.cc/2025/Conference/Submission13583/Reviewer_S5pY"
],
[
"ICLR.cc/2025/Conference/Submission13583/Area_Chair_LYPg"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13583/Reviewer_QGVY"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes to use $-\\\\log(-x+1)$ in ReLU instead of $0$ when the input is negative. The performance on fine-tuning InceptionV3 on Caltech101 and Imagenette is improved over ReLU, ELU, Leaky ReLU, Swish (SiLU) and Mish.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. I believe this direction of activation search is fundamentally impactful in deep learning, because it changes the basic part of neural networks. Though the experiments are very limited, it is already a good sign that this important part works better.\\n2. The paper's message is minimal, direct, and clear.\", \"weaknesses\": \"1. My major concern is that the experiments are restricted to very limited data and models, so LogLU's validity is still questionable on other models and tasks.\\n2. More specifically, the results would be convincing if the author could add experiments on common models, such as , ResNet, UNet, and Transformers. If LogLU works on more models I believe it will improve the paper. \\n3. Another solution that could help is to ask if it is possible to find a dataset or a toy model where LogLU significantly outperforms other activations.\\n3. The model has 73M parameters for Caltech 101 and 37M for Imagenette, both pre-trained on the Imagenet dataset. I don't understand why the models are both InceptionV3 but are different in size.\\n4. I don't understand why the experiments only include fine-tuning, but not training from scratch.\", \"questions\": \"1. How to justify the theoretical reason for using the log function, could you give any intuition?\\n2. Here are some thoughts to justify LogLU and address the theoretical side. $f=-\\\\log(-x+1)$ solves an instance of Monge-Amp\\u00e8re equation $$\\\\log\\\\det f''=2f$$ \\nwhere $\\\\det$ is the analog in the high-dimensional case, associated with Dirichlet boundary condition $\\\\lim_{x\\\\to\\\\partial \\\\Omega} f=\\\\infty$ on the domain $\\\\Omega=(-\\\\infty,1)$. We can alternatively set a Neumann boundary condition $f'(0)=1$ on $\\\\Omega=(-\\\\infty,0)$ to guarantee the $C^1$ continuity. The intuition is that the logarithmic curvature is proportional to the value. The property includes self-concordance and logarithmic homogeneity.\\nSee [1] in Chapter 2.3.3: properties; Chapter 2.5: universality---the log function as a canonical construction.\\nSee [2] in Proposition 1.4.3: a connection with the Calabi theorem.\\n\\n[1] Interior point polynomial time methods in convex programming. A. Nemirovski 2004.\\n\\n[2] Conic optimization: a\\ufb00ine geometry of self-concordant barriers and copositive cones. R. Hildebrand 2017.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work presents the new logarithmic linear unit (LogLU) activation function for deep neural networks.\\nThe LogLU activation solves the problem of vanishing gradient.\\nThis paper shows that LogLU outperformed the other activation functions considered.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper proposes a new activation function for deep neural networks. This is an important topic, considering the significant impact of activation function choice on deep neural network performance. The paper is clear and easy to follow.\", \"weaknesses\": \"This paper does not support some of its claims with enough evidence. For example: Under the abstract, we have: \\\"Its capability to solve fundamental yet complex non-linear tasks, such as the XOR problem, with fewer neurons demonstrates its efficiency in capturing non-linear patterns\\\". There is no evidence to support the claim that LogLU uses fewer neurons. You can strengthen this claim by providing the evidence to support this.\\n\\nUnder the conclusion, we have: \\\"The empirical results show that LogLU consistently outperforms traditional activation functions in terms of convergence speed, stability, accuracy, and loss reduction.\\\". The measure of stability is not clear in this paper. You can strengthen this by explaining how you observed the stability of the networks.\\n\\nThe experiments are limited and insufficient to conclude that LogLU is better than the other activation functions for deep neural networks. This paper did not address possible interaction with other components of a neural network (For example: dropout, learning rate, batch normalization, and so on). Please consider an ablation study that examines LogLU's interaction with other neural network components like dropout, batch normalization, etc.\\n This work only considered some image classification tasks. This is not representative enough to generalize over all deep neural networks. For example, consider other cases such as simple generative models, language-based tasks, and so on.\", \"questions\": \"Please check the weaknesses and respond to the comments. Here is a summary:\\n\\n(1). Address the unsupported claims in the paper.\\n\\n(2). Include more experimental results for ablation studies, more neural architectures, and more tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes LogLU as a new activation function, which is both continuous and differentiable.\\nLogLU is empirically shown to be computationally more efficient compared to modern activation functions such as Swish or Mish, but requires slightly more computation than ReLU or Leaky ReLU.\\nThe authors claim that a simple one-hidden-layer MLP with LogLU activation can learn the XOR function.\\nLogLU is compared to other activation functions using the Caltech-101 and Imagenette (a simplified variant of ImageNet) datasets with the Inception-V3 architecture, demonstrating faster convergence of models with LogLU activation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed LogLU is very simple while being both continuous and differentiable. It requires less computation than modern activation functions such as Swish because it does not involve exponential computations in either the forward or backward pass, although it only requires logarithmic computation in the forward pass.\"], \"weaknesses\": [\"The authors claim that a simple MLP with LogLU activation can learn the XOR function, highlighting this as an advantage of using LogLU. However, MLPs with other activation functions are also capable of learning the XOR function. The authors should discuss why using LogLU is more advantageous than other activation functions in the context of the XOR example.\", \"The experimental evaluations are insufficient. At a minimum, it is necessary to compare the proposed activation function with other methods using network architectures beyond Inception-V3. Additionally, each experiment should be conducted with various random seeds to assess the variability of the outputs (loss or accuracy).\"], \"questions\": [\"On page 1, the manuscript states: \\\"Although Leaky ReLU addresses this problem by permitting small negative values, it introduces the vanishing gradient problem, limiting its effectiveness in deep networks (Maas, 2013).\\\" However, I believe that Leaky ReLU does not introduce the vanishing gradient problem. In fact, Leaky ReLU was proposed to mitigate issues like the dying ReLU problem by allowing a small, non-zero gradient for negative input values. Additionally, no such discussion regarding Leaky ReLU introducing vanishing gradients is found in Maas et al. (2013).\", \"On page 5, the manuscript states that Table 1 shows the derivative of LogLU, but Table 1 does not include this information. Please update Table 1 to include the derivative of LogLU or revise the manuscript to accurately reflect the contents of Table 1.\", \"On page 6, the term \\\"more controlled activations\\\" is ambiguous and requires clarification. The authors should provide a clear definition or explanation of what is meant by \\\"more controlled activations\\\" to enhance the reader's understanding.\", \"The lines in the figures are difficult to distinguish. Please use more distinct colors or linestyles to enhance clarity.\", \"On page 7, why are the model sizes different across datasets, even though Inception-V3 is used for both?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The submission proposes a novel activation function, with claims of improved convergence rate and stability over existing activation functions. Some experimental results are provided to support these claims. However, the reviewers were unanimous in their determination that the experiments were insufficient to fully support the claims.\\n\\nThere was a suggestion that larger architectures (e.g., transformers, resnets, etc) should be used in the experiments, but this was not a large factor in my decision. As pointed out by reviewer oiB4, some of the core claims of the paper were not substantiated by evidence, and there were questions about some of the significance of these claims. E.g., why do we care about XOR when all other activation functions can already deal with this using a very small number of hidden units?\", \"additional_comments_on_reviewer_discussion\": \"There was no discussion.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The paper introduces a new activation function, Logarithmic Linear Unit (LogLU), aimed at addressing issues inherent in widely used activation functions like ReLU, Leaky ReLU, ELU, etc. LogLU uses a logarithmic function for negative inputs, allowing it to maintain active gradients even with negative inputs, potentially reducing issues like dead neurons and vanishing gradients. Experiments are conducted comparing LogLU with other established activation functions across datasets like Caltech 101 and Imagenette using the InceptionV3 architecture. The authors highlight benefits in convergence speed and accuracy, proposing LogLU as a robust alternative for deep learning models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Innovation in activation functions: The proposal of LogLU as a hybrid activation function is novel and provides an interesting alternative to traditional activation functions. The logarithmic component for negative inputs introduces a unique way to handle the dead neuron problem while also limiting gradient vanishing, especially compared to ReLU and Leaky ReLU.\\n\\n2. Experiments performed: The authors performed evaluations on classification benchmark datasets (Caltech 101 and Imagenette) and used InceptionV3 architecture for the classification task. The consistent improvements in Val accuracy (on Caltech 101) and convergence speed were presented in Tables 3 and 4 and Figures 4 and 5, which suggest that LogLU might be a competitive alternative to existing activation functions.\\n\\n3.\\tPerformance on classification task: By demonstrating that LogLU can solve the XOR problem with a simplified architecture, the authors underscore LogLU\\u2019s efficiency in capturing non-linear relationships with fewer neurons, an advantage for both resource efficiency and model scalability.\\n\\n4.\\tAddressing gradient problems: The paper discusses how LogLU mitigates the vanishing and exploding gradient problems, which are common in deeper networks due to the use of traditional activation functions. LogLU\\u2019s bounded gradient across all input values is well-explained and experimentally supported, potentially making it an optimal choice for complex neural architectures.\\n\\n5.\\tEfficient computation: The paper also presents an analysis of computation times, demonstrating that LogLU is computationally efficient (Figure 2). LogLU achieves an average computation time significantly lower than other activation functions (except ReLu and Leaky ReLU), with performance that consistently outpaces more complex alternatives like Mish and Swish.\", \"weaknesses\": \"1.\\tLack of a rigorous study/analyses: Although the paper tries to solve an important problem in deep learning based training of CNNs in the presence of vanishing/exploding gradient problem, the work done in the current version of the paper appears to be very preliminary in nature and there is a huge scope for improvement.\\n\\n2.\\tComparison with more recent activation functions: While the paper covers popular functions like ReLU, ELU, and Swish, it could benefit from comparisons with other activation functions such as SiLU, GELU, Softplus or more recent alternatives like Parametric RSigELU (Kili\\u00e7arslan, et al, Feb 2024) and ErfReLU (Rajanand, et al, May 2024). 
Including such comparisons would provide a broader perspective on LogLU\\u2019s competitive positioning.\\n\\n3.\\tAccuracy on Imagenette dataset: There does not seem to be any significant gain in performance on the Imagenette dataset, where activations such as Swish and Mish marginally beat the proposed activation function. Therefore, the claims of better performance is not applicable on this dataset. \\n\\n4.\\tComputational Complexity Analysis: Although the authors claim computational efficiency, the complexity analysis could be strengthened. The time complexity is presented in aggregate form (average time over multiple runs), but there is limited discussion on LogLU's computational demands relative to exponential or polynomial components in activation functions like ELU or Mish, which could help enhance the claims of efficiency.\\n\\n5.\\tScalability to other deep CNNs and datasets: While the experiments are valuable, they focus primarily on moderately sized datasets for only image classification tasks. Testing LogLU on larger datasets, such as the MNIST, CIFAR10, COCO, CelebA, Pascal VOC, SVHN, etc., and using architectures beyond InceptionV3 (e.g., ResNet or transformer-based models) could provide deeper insights into LogLU\\u2019s applicability in large-scale settings.\\n\\n6.\\tScalability to loss functions beyond cross-entropy: Since the gradient computation depends on loss function, it would be highly valuable to assess the effectiveness of LogLU for different loss functions for the classification task. These directions were not explored in the current version of the work. \\n\\n7.\\tScalability to tasks beyond classification: The effectiveness of LogLU on other tasks such as image segmentation, object detection or image generation, etc. remains unexplored. The work could potentially benefit from showing superior performance/computational efficiency over other activation functions in a variety of other prominent computer vision tasks.\\n\\n8.\\tAblation Studies: The effectiveness of LogLU in specific neural network layers (e.g., convolutional layers vs. dense layers) or different learning rates and optimizers remains unexplored. Adding ablation studies could help isolate the benefits of LogLU more distinctly across various configurations.\", \"questions\": \"1.\\tSome inconsistent/undefined concepts? The loss function used in Section 3.2 seems to be binary cross entropy loss. While this might be obvious to some, the loss function was not defined prior to Section 3.2, which make the further discussion confusing. In Section 5, the authors talk about achieving greater \\u201cstability\\u201d with LogLU. Stability in what? This term in not (well-)defined in the paper.\\n\\n2.\\tLack of error analysis/multiseed runs: The work lacks any error analysis (no error bars in plots or tables) whatsoever. Moreover, all the loss/accuracy curves were evaluated for a single seed. Showing the robustness of LogLU in a multiseed setting will enhance the efficacy of the proposed approach. \\n\\n3.\\tExtend empirical comparison scope: Include additional activation functions, particularly the newer ones like Parametric RSigELU, ErfReLU, etc. to establish a more comprehensive benchmarking framework. Further, investigate LogLU\\u2019s performance on diverse and prominent architectures like DenseNet, ResNet, VGG, etc. to reinforce its general applicability.\\n\\n4.\\tDetailed computational complexity analysis: A more granular breakdown of the time complexity will enhance the results of the paper. 
It might be worth performing time-complexity analysis for images instead of multiple realizations of a large vector of fixed size. Test and report the computation time of LogLU within different network architectures (e.g., shallow networks, ResNet, VGG) and layer types (e.g., dense layers vs. convolutional layers). This analysis can reveal how the activation function\\u2019s computational demands vary with the network\\u2019s depth, type, and layer configuration, especially for architectures optimized for speed.\\n\\n5.\\tComparison to other methods for mitigating vanishing/exploding gradients issue: There are other successful and competitive methods for mitigating vanishing/exploding gradient problems at the architectural level such as the ResNet architecture. These tackle the gradient issue via architectural design using skip-connections and identity mapping to reformulate the CNN layers for learning residual functions, while specially engineered activation functions address it via their mathematical properties like non-saturating properties (LeakyReLU), gradient preservation (Swish, GELU) for negative inputs, incorporating learnable parameters (Parametric ReLU or PReLU) etc. While exploring architecture vs. activation function for solving gradient issue is out of the scope of this work (which focuses solely on activation functions), a detailed discussion highlighting other non-activation function based techniques for overcoming vanishing/exploding gradient problem will help with the completeness of the paper. \\n\\n6.\\tExamine gradient flow in various conditions: Explore gradient dynamics with respect to learning rate schedules and optimizers to provide insight into how LogLU performs under different training regimes. Additionally, ablation studies on placement within specific layers could clarify LogLU\\u2019s most impactful applications.\\n\\n7.\\tTheoretical insights on regularization effect: Since the logarithmic component potentially regularizes activations for negative inputs, discussing theoretical implications related to regularization could open new perspectives on the theoretical advantages of LogLU in avoiding overfitting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
1CeIRl147S | Domain-specific Benchmarking of Vision-Language Models: A Task Augmentation Framework Using Metadata | [
"Tim Rädsch",
"Leon Mayer",
"Simon Pavicic",
"Ali Emre Kavur",
"Marcel Knopp",
"Barış Öztürk",
"Klaus Maier-Hein",
"Paul F Jaeger",
"Fabian Isensee",
"Annika Reinke",
"Lena Maier-hein"
] | The reliable and objective evaluation of AI models is essential for measuring scientific progress and translating methods into practice. However, in the nascent field of multimodal foundation models, validation has proven to be even more complex and error-prone compared to the field of narrow, task-specific AI. One open question that has not received much attention is how to set up strong vision language model (VLM) benchmarks while sparing human annotation costs. This holds specifically for domain-specific foundation models designed to serve a predefined specific purpose (e.g. pathology, autonomous driving) for which performance on test data should translate into real-life success. Given this gap in the literature, our contribution is three-fold: (1) In analogy to the concept of data augmentation in traditional ML, we propose the concept of task augmentation - a resource-efficient method for creating multiple tasks from a single existing task using metadata annotations. To this end, we use three sources to enhance existing datasets with relevant metadata: human annotators (e.g. for annotating truncation), predefined rules (e.g. for converting instance segmentations to the number of objects), and existing models (e.g. depth models to compute which object is closer to the camera). (2) We apply our task augmentation concept to several domains represented by the well-known data sets COCO (e.g. kitchen, wildlife domain) and KITTI (autonomous driving domain) datasets to generate domain-specific VLM benchmarks with highly reliable reference data. As a unique feature compared to existing benchmarks, we quantify the ambiguity of the human answer for each task for each image by acquiring human answers from a total of six raters, contributing a total of 162,946 human baseline answers to the 37,171 tasks generated on 1,704 images. (3) Finally, we use our framework to benchmark a total of 21 open and frontier closed models. Our large-scale analysis suggests that (I) model performance varies across domains, (II) open models have narrowed the gap to closed models significantly, (III) the recently released Qwen2 72B is the strongest open model, (IV) human raters outperform all VLMs by a large margin, and (V) many open models (56\%) perform worse than the random baseline. By analyzing performance variability and relations across domains and tasks, we further show that task augmentation is a viable strategy for transforming single tasks into many and could serve as a blueprint for addressing dataset sparsity in various domains. | [
"VLM",
"Benchmark",
"Annotation",
"Ambiguity"
] | https://openreview.net/pdf?id=1CeIRl147S | https://openreview.net/forum?id=1CeIRl147S | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"fgItS52uPW",
"f40KkO4JcA",
"PsPAKPsMlp",
"8dpuVUtGEs"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1730914222959,
1730634436243,
1730320329764,
1731663125155
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6600/Reviewer_KsrF"
],
[
"ICLR.cc/2025/Conference/Submission6600/Reviewer_p8BK"
],
[
"ICLR.cc/2025/Conference/Submission6600/Reviewer_z3t7"
],
[
"ICLR.cc/2025/Conference/Submission6600/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a new paradigm for Vision-Language Models (VLMs) by creating multiple tasks from a single existing task, called task augmentation. This is achieved by re-annotating an existing benchmark with various tools for diverse purposes. The new paradigm is validated on the COCO and KITTI datasets. Extensive experiments on the created benchmarks are conducted, giving several interesting observations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1) The paper is well-structured and easy to follow.\\n\\n2) The paper thoughtfully considers building strong domain-specific VLM benchmarks while sparing human annotation costs. I agree that picking the right tasks is challenging.\\n\\n3) Building benchmarks on existing ones with re-annotations is a smart and efficient way to control data quality and diversity. The data curation pipeline may be helpful to the community.\\n\\n4) Extensive evaluation results are provided. Some observations are somewhat interesting.\", \"weaknesses\": \"1) Although the idea is smart, the applicability of the data re-annotation pipeline is unknown. Currently, it is demonstrated on COCO and KITTI where instance-level annotations are provided. It would be good to elaborate more about how to generalize the data generation pipeline.\\n\\n2) I do not make it clear how the proposed approach can address the challenges listed in Sec.1: domain-specific validation, picking the right tasks, balancing quantity and quality.\\n\\n3) The notes drawn from the evaluation results seem not new for authors. Similar conclusions can be seen in various VLM evaluation papers. \\n\\n4) I do not see a reason why the proposed approach can be more useful than existing evaluation benchmarks. A detailed comparison with existing ones should be presented.\\n\\n5) The paper lacks an analysis of the evaluation results or evaluation approach.\", \"questions\": \"$I(C_{i,q,m})$ in Eqn.(1) is not explained.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a domain-specific benchmark for evaluating Vision-Language Models (VLMs), utilizing a task augmentation technique. The benchmark provides interesting conclusions, such as considerable model performance variations across related domains. However, the primary contribution\\u2014the automatic and efficient task augmentation technique\\u2014warrants further examination. And some important details concerning the benchmark lack clarity. In summary, I think this work makes a valuable contribution but requires further revisions for publication.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents a new benchmark for evaluating VLMs, which contributes for the development of this field.\", \"weaknesses\": \"1. The core contribution, \\\"Automatic task augmentation\\\", as claimed in line 98, appears not to be \\\"automatic\\\" nor generally available. The dataset creation still involves considerable human efforts, including metadata annotation, rule-writing, task template design, and multi-round refinement of prompts (lines 308-309).\\n2. The concept of \\\"Task Augmentation\\\", although presented as new, has been thoroughly studied in previous works [1,2,3]. These works have explored methods of generating additional tasks using metadata or simple tasks for either model evaluation or instruction tuning.\", \"questions\": \"1. Could you offer a detailed demonstration of the human effort required in each dataset creation stage? This would help in understanding the resource-efficiency and automation of the \\\"Automatic task augmentation\\\" technique.\\n2. How does this benchmark compare to existing VLM benchmarks in terms of task quantity, question diversity, and problem difficulty? A thorough comparison would highlight the benefits of the proposed task augmentation method.\\n3. Can you clarify the task generation method using metadata? Is this done through pre-set question templates, generated by LLMs, or manual writing? A clear description of this would be valuable for reproduction. \\n4. Could you include the statistical data about the 25 tasks, such as the number of questions in each task?\\n\\n[1] Luo Z, Xu C, Zhao P, et al. Wizardcoder: Empowering code large language models with evol-instruct[J]. arXiv preprint arXiv:2306.08568, 2023.\\n[2] Muennighoff N, Liu Q, Zebaze A, et al. Octopack: Instruction tuning code large language models[J]. arXiv preprint arXiv:2308.07124, 2023.\\n[3] Shypula A, Madaan A, Zeng Y, et al. Learning performance-improving code edits[J]. arXiv preprint arXiv:2302.07867, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a method for repurposing existing vision datasets to new visual tasks by leveraging the same imagery and obtaining additional metadata through a combination of human input, simple heuristic rules, and pre-trained models (e.g., segmentation and depth models). The generated data is then used to evaluate a comprehensive set of existing VLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses a critical issue: developing evaluation datasets for domain-specific benchmarking of VLMs\", \"It includes an extensive evaluation using a diverse set of VLMs across various model sizes, enhancing the robustness of the findings.\", \"The method demonstrates effectiveness, as even powerful models struggle with some tasks, demonstrating that the generated benchmark is challenging.\", \"Human validation is incorporated to ensure clarity of image-question pairs and reduce ambiguity.\"], \"weaknesses\": [\"While the authors formalized a pipeline for \\u201ctask augmentation,\\u201d the concept of repurposing existing imagery from available datasets and leveraging metadata (using off-the-shelf models or human input) to evaluate different tasks or augment training sets is well-explored in prior work. For instance, see [1],[2],[3],[4] among many others. In a way or another those benchmark repurpose existing vision datasets and use either humans or off-the-shelf models to generate additional metadata or VQA type questions.\", \"The paper initially frames itself as a method for generating validation data for domain-specific foundation models with predefined, specific purposes. However, most models evaluated are \\u201cgeneralist\\u201d VLMs rather than \\u201cspecialist\\u201d models. This is fine but the motivation and message should be adjusted accordingly. Additionally, while the motivation includes applications in fields like pathology and autonomous driving, no data or model relevant to these high-stakes areas is evaluated. Thus, the suitability of the pipeline for evaluating such specialized tasks remains uncertain.\", \"The writing could be further refined, as some sections take longer to convey main points. Streamlining sections such as the introduction, Section 2.2, and Section 3.3 could improve clarity and flow.\", \"While the proposed metric evaluation may be intuitive to the authors, incorporating more widely recognized metrics alongside individual scoring for each task could improve the benchmarks' accessibility and broader adoption.\", \"Some important figures, like Figure 4, are difficult to interpret due to crowding. Grouping models by parameter count or model family could help clarify these visuals. 
Models differing in parameter count by more than 10x may not need to be displayed together unless a significant point is being illustrated.\", \"In addition to releasing the code, sharing the final generated dataset could enhance its utility for the community, potentially offering greater practical value than the code alone.\", \"Overall, I recommend that the authors improve the writing and presentation, with an emphasis on the benchmark and findings as the main focus rather than the data generation pipeline.\", \"[1] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs\", \"[2] SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models\", \"[3] Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models\", \"[4] Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild\"], \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Based on the reviewers' feedback, we are withdrawing this paper from ICLR in its current form. We thank the reviewers for their feedback and highlighting that our approach is a \\\"smart idea\\\" [KsrF] and our work \\\"addresses a critical issue\\\"[z3t7].\\nWe will incorporate the feedback.\\nFor clarification, the datasets, the benchmark and the annotations will be made available. \\nAdditionally our approach is mostly automatic and the user can even remove tasks that would require additional annotations. The approach works even without instance segmentation, given that the user utilizes a prompt method, such as SAM. However that would reduce the automation level, but is still very effective. \\nWe will ensure this information is communicated more clearly in the manuscript.\\nOnce more, thank you for your reviewing und providing valuable feedback.\"}"
]
} |
|
1CRu6bGx25 | Crack in the Armor: Universal Stability Measurement for Large Language Models | [
"Run Yang",
"Runpeng Dai",
"Shupeng Li",
"Penghao Zhao",
"Hongtu Zhu",
"Fan Zhou"
] | Large Language Models (LLMs) and Vision Language Models (VLMs) have become essential to general artificial intelligence, demonstrating impressive capabilities in task understanding and problem-solving. The real-world functionality of these large models critically depends on their stability. However, there is still a lack of rigorous studies examining the stability of LLMs when subjected to various perturbations.
In this paper, we aim to address this gap by proposing a novel influence measure for LLMs. This measure is inspired by statistical methods grounded in information geometry, offering desirable invariance properties. Using this framework, we analyze the sensitivity of LLMs in response to parameter or input perturbations.
To evaluate the effectiveness of our approach, we conduct extensive experiments on models of varying sizes, from 1.5B to 13B parameters. The results clearly demonstrate the efficacy of our measure in identifying salient parameters and pinpointing vulnerable areas of input images that dominate model outcomes. Our research not only enhances the understanding of LLM sensitivity but also highlights the broad potential of our influence measure in optimizing models for tasks such as model quantization and model merging. | [
"Large Language Models",
"sensitivity analysis",
"local influence measure"
] | https://openreview.net/pdf?id=1CRu6bGx25 | https://openreview.net/forum?id=1CRu6bGx25 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"t8Kvqdpa6H",
"qoEO0s7GEm",
"Xj8YqAR8qr",
"BNIWNs4m29"
],
"note_type": [
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1730342647557,
1732587354570,
1730772866719,
1729273141464
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4269/Reviewer_z9Gk"
],
[
"ICLR.cc/2025/Conference/Submission4269/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4269/Reviewer_igMG"
],
[
"ICLR.cc/2025/Conference/Submission4269/Reviewer_f5Nc"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a \\\"First-order Local Influence\\\" (FI) metric for quantifying LLM and VLM sensitivity to perturbations. By examining both internal (parameter) and external (input) perturbations, the FI metric aims to identify model weaknesses and improve robustness through selective parameter preservation. Experiments demonstrate FI\\u2019s potential in tasks like model quantization and merging.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"FI offers a theoretically grounded approach to assessing parameter and input sensitivity, which can support robustness improvements across applications.\", \"The paper provides experiments on various models, including applications of FI in quantization and model merging, demonstrating practical value.\"], \"weaknesses\": [\"The paper is positioned primarily around LLMs and VLMs, but these stability concerns are more broadly applicable to general ML. A broader contextual framing would benefit the paper.\", \"The choice to protect high-FI parameters during distillation/model merging is questionable since some high-FI parameters might correspond to irrelevant or \\u201cnonsensical\\u201d inputs.\", \"Prior works on Fisher Information Matrix (FIM) in pruning and parameter sensitivity (e.g., Frantar & Alistarh, 2023; Yu et al., 2024) and Sharpness-Aware Minimization (SAM) (Foret et al., 2021) are not mentioned. These are relevant for contextualizing FI's robustness contributions.\"], \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"A white-box method for identifying the most important/salient regions of an input for making a prediction is presented. The paper describes the method using the formalism of differential geometry and apply it to VLMs and LLMs. For VLMs, they apply their method to sensitivity analysis For LLMs, they apply their method to model quantization and model merging.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents interesting and novel applications of saliency maps to model quantization and model merging.\\n\\nThe paper strongly motivates the usage of saliency maps.\", \"weaknesses\": \"I currently suggest to reject this paper on the basis that I don't understand the novelty of the method and due to the lack of comparisons with the baselines. I am open to changing my mind, but I strongly encourage the authors to focus on those specific points in their response and be very clear how their method differs from existing works.\", \"novelty_of_the_method\": \"Not clear how the proposed method is different from other analysis methods such as Saliancy maps [1] and adversarial perturbations. A throughout analysis of the related methods should be presented and compared against in the paper.\", \"comparison_with_the_baselines\": \"Several compelling applications of the method are proposed, but no comparison to existing baseline methods tacking these applications. The authors should consider comparing the presented method against relevant baselines on standard benchmarks so that the reader can assess the usefulness of the method.\", \"clarity_of_the_paper\": \"I found the paper to be hard to follow. The paper introduces unnecessarily abstracts notions to describe the method. I don't understand why such abstraction is needed to describe the idea presented in the paper. Moreover, a lot of terms a unnecessarily defined. For example, $l(\\\\omega|y,x,theta)$ could be written as $\\\\log P(y|x,\\\\theta,\\\\omega)$ and $f(\\\\omega)$ as $-P(y_\\\\text{pred}|x,\\\\theta,\\\\omega$ and it would make the reading clearer. Some terms like $h_j$ are not clearly defined.\\n\\n[1] https://arxiv.org/abs/1312.6034\", \"questions\": [\"How does the method presented in the paper differs from existing works?\", \"Figure 3, Table 1, Figure 4: A baseline where the parameters are randomly selected is needed. What are the performances of such a baseline?\", \"Table 1, Figure 3: How does the method compares to other pruning methods such as the one presented in this survey [2]\", \"Table 2: How does the method compares to other model merging method such as the one presented in this survey [3]\", \"[2] https://arxiv.org/pdf/2308.06767\", \"[3] https://arxiv.org/pdf/2408.07666\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents a new approach for understand VLM predictions, especially relating to their robustness to perturbation. Their metric (FI) essentially estimates the change in the model output with respect to input (or parameter) perturbations. The authors test their metric by identifying for a range of images the pixels that affect model predictions the most and altering them. Furthermore, the authors test their approach with respect to input parameters by identifying crucial parameters that should be left intact during quantization, and validating that performance deteriorates less when they're not changed.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed approach seems effective in identifying input regions/parameters that have a large effect on model output.\", \"The authors test their approach in a range of applications.\", \"I like the application of identifying sensitive parameters that should remain intact during quantization/sparsification.\"], \"weaknesses\": [\"The paper does not compare against existing baselines.\", \"On the sensitivity to input pixels, how does this approach compare to Grad-CAM [1] and subsequent work? It is important to see a quantitative analysis.\", \"On the sensitivity to model parameters, it would be nice to see a comparison with existing approaches, e.g., [2], ...\", \"I feel there is too much going on in the paper: merging sensitivity to input images + parameters at the same time seems too much for a single project. I would suggest focusing on one and studying it in detail.\", \"Sensitivity of VLMs under different prompts is interesting but requires further analysis especially as to which changes in the prompts affect the influences, the semantic closeness of images and prompts, etc.\", \"[1] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Selvaraju et al., 2016.\", \"[2] Decomposing and Editing Predictions by Modeling Model Computation, Shah et al., 2024.\"], \"questions\": \"See weaknesses.\\n\\n- Can you provide more details about the compute cost of your approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1CLzLXSFNn | TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis | [
"Shiyu Wang",
"Jiawei LI",
"Xiaoming Shi",
"Zhou Ye",
"Baichuan Mo",
"Wenze Lin",
"Ju Shengtong",
"Zhixuan Chu",
"Ming Jin"
] | Time series analysis plays a critical role in numerous applications, supporting tasks such as forecasting, classification, anomaly detection, and imputation. In this work, we present the time series pattern machine (TSPM), a model designed to excel in a broad range of time series tasks through powerful representation and pattern extraction capabilities. Traditional time series models often struggle to capture universal patterns, limiting their effectiveness across diverse tasks. To address this, we define multiple scales in the time domain and various resolutions in the frequency domain, employing various mixing strategies to extract intricate, task-adaptive time series patterns. Specifically, we introduce TimeMixer++, a general-purpose TSPM that processes multi-scale time series using (1) multi-resolution time imaging (MRTI), (2) time image decomposition (TID), (3) multi-scale mixing (MCM), and (4) multi-resolution mixing (MRM) to extract comprehensive temporal patterns. MRTI transforms multi-scale time series into multi-resolution time images, capturing patterns across both temporal and frequency domains. TID leverages dual-axis attention to extract seasonal and trend patterns, while MCM hierarchically aggregates these patterns across scales. MRM adaptively integrates all representations across resolutions. TimeMixer++ achieves state-of-the-art performance across 8 time series analytical tasks, consistently surpassing both general-purpose and task-specific models. Our work marks a promising step toward the next generation of TSPMs, paving the way for further advancements in time series analysis. | [
"time series",
"pattern machine",
"predictive analysis"
] | Accept (Oral) | https://openreview.net/pdf?id=1CLzLXSFNn | https://openreview.net/forum?id=1CLzLXSFNn | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xQ3MlviDWh",
"wy3M6cZN3s",
"t8T1yPkCDa",
"r6JwMhIUzI",
"pNNJGSTYgT",
"nXSrQnTEMW",
"mFfzQkCqu3",
"lY5vWfbs2z",
"kSFvMmo42o",
"jtPPnd7K1X",
"jey1CXHkJL",
"gEnihjsiOU",
"eXZeISXbwv",
"ddH80g2x5a",
"a3W9a64BWy",
"X5qhV3Yy8C",
"VeSSLk1rq7",
"SlGWie1BDv",
"RfSIu0IlUv",
"PygUsAWjj8",
"NwUgZXsKfZ",
"KOpLQq2UXy",
"INY3Z444gw",
"HXgrkIBW7J",
"HM126iPB33",
"AojZpdtDZY",
"5qV2QWLYYz",
"0rEoqXIyBP"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1732196625183,
1732697873071,
1732290556123,
1732417281703,
1732189405768,
1732499349232,
1732196259000,
1742204157814,
1729529064966,
1732806787599,
1732533845260,
1732197808754,
1732286696740,
1737523981689,
1730190526493,
1734588538065,
1732245018916,
1732193892664,
1732195548824,
1730705154475,
1732226171921,
1739365945993,
1742204186952,
1732195212332,
1732188227956,
1732697762767,
1740838915167,
1739554766590
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_9h69"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"~Shiyu_Wang3"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_9h69"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Area_Chair_eU5n"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_e5Jj"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_LKyA"
],
[
"ICLR.cc/2025/Conference/Submission9409/Area_Chair_eU5n"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_e5Jj"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_LKyA"
],
[
"~Kashif_Rasul1"
],
[
"~Shiyu_Wang3"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9409/Reviewer_9h69"
],
[
"~Danny_Dongyeop_Han1"
],
[
"~Shiyu_Wang3"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer LKyA [Part 1]\", \"comment\": \"We would like to sincerely thank Reviewer LKyA for providing a detailed review and insightful suggestions.\\n\\n> **Q1:** \\\"The fonts in the figures should be enlarged for better readability. For example, in Figure 1 (right), the label \\\"Benchmarking model performance across representation analysis in four tasks\\\" appears blurred. Additionally, consider using a single set of legends for all four tasks to enhance clarity.\\\"\\n\\nThank you for your valuable feedback. We appreciate your observation regarding the figure's readability. In the _$\\\\underline{\\\\text{revised paper}}$_, we have **updated Figure 1 and provided an enlarged and clearer version in Appendix E**. We believe these improvements improve the visual quality and interpretability of the figure.\"}",
"{\"comment\": \"It looks good of these improvements, thanks for your clarifications.\"}",
"{\"comment\": \"We appreciate your thoughtful review of our work and your recognition of its contributions. Your insightful comments have been useful in helping us improve our paper further.\"}",
"{\"title\": \"Summary of Revisions\", \"comment\": \"We sincerely thank all the reviewers for their detailed reviews,\\nwhich are instructive for us to improve our paper further.\\nWe have revised the paper according to the comments, and the \\nedits have been highlighted in **RED**. \\n\\nThis paper introduces TimeMixer++, a novel and general time series pattern machine for universal predictive analysis. \\nTimeMixer++ disentangles seasonality and trend patterns \\nin latent space through Time Image Decomposition (TID) and adaptively integrates these patterns using \\nMulti-Scale Mixing (MCM) and Multi-Resolution Mixing (MRM). **It achieves state-of-the-art performance \\nacross eight diverse time series tasks, outperforming 27 baselines on 30 benchmarks**.\\n\\nThe reviewers generally held positive opinions of our paper, noting that the proposed method **\\\"demonstrates innovation\\\"**\\nand\\nis **\\\"very interesting\\\"**; **\\\"the manuscript and appendix are well-prepared\\\"**;\\n**\\\"the experiments were very thorough\\\"** and **\\\"comprehensive\\\"**; and that\\nwe **\\\"present SOTA results\\\"**.\\n\\n\\nThe reviewers also raised insightful and constructive concerns. Here is the summary of the major revisions:\\n\\n- **Provide cross-domain zero-shot forecasting results (Reviewer e5Jj)**: \\nFollowing the reviewer's suggestion, we conducted a zero-shot forecasting experiment \\non two additional cross-domain datasets, M3 and M4. TimeMixer++ continues to perform best\\nin this setting. For complete results, please refer to Appendix D.\\n\\n- **Add inference time comparison (Reviewer e5Jj)**:\\nTo address the reviewer's request, we have reported the inference time of TimeMixer++ \\nalong with seven baselines on long-term forecasting tasks to provide a more comprehensive \\nunderstanding of the efficiency analysis. TimeMixer++ achieves an inference speed of 90 ms/iter, \\nwhich matches that of TiDE and surpasses other models. For the complete results, please refer to Appendix D.\\n\\n- **Add ablations on the channel mixing module (Reviewer e5Jj)**:\\nWe have included ablation results evaluating the choice of channel mixing based on a multi-layer \\nperceptron (MLP). The results demonstrate that our proposed attention-mixing approach consistently \\noutperforms MLP-mixing. The updated results can be found in Appendix D.\\n\\n- **Update the figures (Reviewer LKyA):**\\nFollowing the reviewer's suggestion, we have updated Figure 1 in the main text and Figure 12 in Appendix E.\\n\\n\\n- **Clarify differences among Koopa, TFDNet, and FEDNet (Reviewer 9h69)**:\\nWe clarify that TimeMixer++ is distinct from Koopa, TFDNet, and FEDNet in its technical design, pattern \\nlearning capabilities, and respective objectives within the literature.\\nAdditionally, we have updated the Introduction and Related Work sections, adding relevant citations to \\nhighlight our contributions.\\n\\nThe valuable suggestions from reviewers are very helpful for us to revise the paper to a better shape. We'd be \\nhappy to answer any further questions.\"}",
"{\"title\": \"Response to Reviewer e5Jj [Part 2]\", \"comment\": \"> **Q2:** \\\"As mentioned by authors, some time series tasks (imputation, anomaly detection) benefit more from diverse representations while others like forecasting and classification benefit from consistent representation. Given this, is there any way to leverage a routing model dependent on the proposed task type, which could lower the inference-time cost of this model?\\\"\\n \\nThank you for your insightful suggestion regarding routing mechanisms. Your idea offers valuable guidance for potential future research directions. To enhance efficiency in handling diverse time-series tasks, designing a multitask model with an Mixture-of-Experts (MoE) mechanism could offer a promising solution. In this design, specialized experts would dynamically adapt to different tasks, enabling diverse representations for imputation and anomaly detection, while maintaining consistent representations for forecasting and classification. Incorporating an MoE into the multi-resolution time imaging module and the multi-scale mixing module, as illustrated in Figure 2, could be a promising approach. We appreciate your thoughtful feedback and will consider exploring these possibilities in future work.\\n\\n\\n> **Q3:** \\\"MTS-Mixer presents another approach to channel decomposition which similarly outperformed competing models, but they found the approach worked best with MLPs rather than attention-based models. Have the authors explored this technique?\\\"\\n \\nThank you very much for your thoughtful comment. \\n- **Channel dependence is indeed considered valuable and informative**, as demonstrated by our ablation experiments in Figure 7 and recent research[1-3]. \\n- **Both MLP-based and attention-based models are currently the two dominant paradigms for handling channel dependencies.** Following MTS-Mixer, subsequent works such as Crossformer[1], iTransformer[2], and Moirai[3] have shown that attention mechanisms can also effectively capture channel dependencies.\\n\\nYour insight is valuable. We had conducted experiments to explore different combinations of strategies for channel mixing. The detailed results of these strategies are provided here for your reference:\\n\\n| Method | MLP-mixing (MSE) | MLP-mixing (MAE) | Attention-mixing (MSE) | Attention-mixing (MAE) |\\n|---------|------------------|------------------|------------------------|------------------------|\\n| 96 | 0.587 | 0.271 | 0.412 | 0.297 |\\n| 192 | 0.599 | 0.292 | 0.434 | 0.289 |\\n| 336 | 0.627 | 0.346 | 0.452 | 0.297 |\\n| 720 | 0.641 | 0.377 | 0.483 | 0.311 |\\n\\nWe evaluated two channel mixing strategies, MLP-mixing and Attention-mixing, on the Traffic dataset, which **comprises 862 channels**. \\nThe input context length is fixed at 96, and the prediction horizons are set to {96, 192, 336, 720}. \\n\\nFrom the table, we observe that in TimeMixer++, **Attention-mixing consistently outperforms MLP-mixing in terms of mean squared error (MSE) for all prediction horizons**, particularly at shorter horizons. For example, at the 96-step prediction horizon, Attention-mixing achieves an MSE of 0.412, which is a **29.8% improvement** over MLP-mixing. Similarly, at the 720-step horizon, the MSE reduction with Attention-mixing is **24.6%**.\\n\\nThese results have also been included in the Appendix D of the _$\\\\underline{\\\\text{revised paper}}$_.\\n\\n- [1] Zhang, Yunhao, and Junchi Yan. 
\\\"Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting.\\\" The eleventh international conference on learning representations. 2023.\\n- [2] Liu, Yong, et al. \\\"itransformer: Inverted transformers are effective for time series forecasting.\\\" arXiv preprint arXiv:2310.06625 (2023).\\n- [3] Woo, Gerald, et al. \\\"Unified training of universal time series forecasting transformers.\\\" arXiv preprint arXiv:2402.02592 (2024).\"}",
"{\"title\": \"Request of Reviewer's attention and feedback\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your valuable review, which has inspired us to improve our paper further. \\nThis is a kind reminder that it has been four days since we submitted our rebuttal. We kindly ask if our responses have addressed your concerns.\\n\\nFollowing your suggestions, we have implemented the following updates:\\n\\n- **Clarify the position of our work**, emphasizing that we introduce a practical and innovative model that empirically \\nadvances the state of the art across eight time series analysis tasks.\\n- **Revise the introduction and related work sections** in the _$\\\\underline{\\\\text{revised paper}}$_, **adding relevant citations** and providing a \\ndetailed discussion of the mentioned works.\\n- **Provide a detailed comparison between TimeMixer++ and TimeMixer**, including a result summary table to enhance clarity and insight.\\n\\n\\n \\nIn this paper, we propose TimeMixer++ as a general Time Series Pattern Machine (TSPM), supported by extensive experiments, visualizations, \\nand ablations to substantiate our claims. \\nAll the revisions have been incorporated into the _$\\\\underline{\\\\text{revised paper}}$_ for your review.\\n\\nThank you again for your dedication and feedback. We look forward to hearing your thoughts on our revisions.\"}",
"{\"title\": \"Response to Reviewer 9h69 [Part 4]\", \"comment\": \"> **Q1&2:** \\\"What is the naming of the method, i.e., TimeMixer++ in terms of? What is the connection with the method TimeMixer?\\\"\\n\\nThank you for your question.\\n\\n**Similarities:** \\nBoth **TimeMixer** and **TimeMixer++** model multi-scale time series by decomposing them into seasonal and trend components, which are subsequently mixed to capture underlying temporal patterns.\\n\\n**Differences**: \\n**1. Decomposition**: \\n- **TimeMixer**: Uses moving averages for seasonal and trend decomposition, which is limited in flexibility. \\n- **TimeMixer++**: Replaces moving averages with **axial attention** applied to **multi-resolution time images**, enabling more precise and adaptive pattern extraction **in the latent space**.\\n\\n**2. Mixing Strategies**: \\n- **TimeMixer**: Relies **solely on hierarchical MLPs for temporal mixing** and **ignores channel mixing**.\\n- **TimeMixer++**: Introduces **hierarchical convolutions** with **inception blocks** for parameter-efficient mixing and enhances **channel mixing** through the use of channel-wise attention.\\n\\n**3. Roles as general TSPM**: \\n- It is important to emphasize the fundamental difference in objectives. **TimeMixer++**, as a **General TSPM**, is designed to handle **general time series analysis tasks**. Its primary goal is to develop powerful representation capabilities that enable robust performance across diverse tasks.\\n- In contrast, TimeMixer is specifically designed to optimize **time series forecasting**.\\n- We present a **concise summary** of the average performance of these models across multiple tasks, demonstrating the effectiveness and adaptability of TimeMixer++.\\n\\n\\n| Method | Long term Forecasting | | Uni. Short Term Forecasting |Mul. Short Term Forecasting | Imputation | | Few-Shot | | Zero-Shot | | Classification | Anomaly Detection |\\n|---------------|-----------------------|------------------|-----------------------------|------------------|-----------------------------|------------------|------------|------------------|----------|------------------|-----------|------------------|\\n| | MSE | MAE | SMAPE | MAPE | MSE | MAE | MSE | MAE | MSE | MAE | ACC (%) | F1-Score (%) |\\n| TimeMixer++ | **0.286** | **0.301** | **11.448** | **10.08** | **0.063** | **0.149** | **0.332** | **0.371** | **0.386**| **0.408** | **75.9** | **87.47** |\\n| TimeMixer | 0.314 | 0.329 | 11.723 | 10.59 | 0.103 | 0.212 | 0.374 | 0.390 | 0.467 | 0.446 | / | / |\"}",
"{\"title\": \"Part 1: Thank you for your thoughtful comments.\", \"comment\": \"Dear Danny,\\n\\nThank you for your thoughtful and detailed questions! We're very glad to see your interest in applying TimeMixer++ to medical time series data. Below, we\\u2019ve provided answers to your questions:\\n\\n**1. Regarding the SSL pretext task on point-wise token or patch-wise token**\\n\\nYour question is profound and highly relevant, as it touches on a fundamental issue in time series modeling: how to define and process tokens for time series data. Currently, there are two mainstream approaches: **point-wise tokens** [1] [5] [6] and **patch-wise tokens** [4]. Additionally, Autoformer[2] and iTransformer[3] propose a third perspective, namely **series-wise tokens**. To address this question, it is necessary to revisit the concept of a token.\\n\\nIn its simplest terms, a token is a term borrowed from NLP[9,10], where **a token is the smallest unit of text processed by a model, which can be a word, subword, character, or special symbol, depending on the tokenization method used**. This concept has been adapted for time series modeling. Fundamentally, a token represents the smallest unit of data processed by a model. A straightforward approach is to treat each individual point in a time series as a token, which has been the dominant paradigm in time series modeling for a long time.\\n\\nHowever, starting with PatchTST, inspired by the success of Vision Transformers (ViT)[8] in the image domain, a new idea emerged: introducing the concept of **patches** into time series modeling. This involves grouping subsequences into patches and treating each patch as a token. This raises a critical question: **how should patches be defined in time series data?** The current approach, largely following PatchTST, involves extracting fixed-length subsequences from the original series using a sliding window. However, this approach faces a significant challenge in time series data: the trends and periodicities of different sequences often vary greatly. Time series data contain both rich global features and diverse local features, and the choice of patch length and sliding window strategy can significantly impact the model. This essentially becomes a hyperparameter selection problem. Poor choices can not only degrade prediction accuracy but may also distort the inherent characteristics of the time series.\\n\\nGiven this, **point-wise tokens**, which preserve the most complete temporal information, remain a reasonable choice. On the other hand, iTransformer[3] takes an entirely different approach by treating the entire input sequence as a single token, offering a novel perspective.\\n\\nReturning to the original question, none of these approaches\\u2014whether point-wise tokens, patch-wise tokens, or series-wise tokens\\u2014are likely to be perfect solutions. Time series data differ significantly from text and images, and directly transplanting solutions from these domains may not always be appropriate. This is an area that requires further exploration.\\n\\nThat said, **patch-wise tokens** have a notable advantage in the context of pretraining time series models: they reduce the number of tokens compared to point-wise tokens (since multiple points are grouped into a single patch), thereby improving training and inference efficiency. This is likely one of the reasons why many time series foundation models have adopted this approach. Time-MoE[7], like our approach, continues to use point-wise tokens. 
However, it leverages the design of **Multi-resolution Forecasting**, which helps improve efficiency to some extent.\\n\\n\\n**2. Regarding the any-variate input variables**\\n\\nTo be frank, our work has not focused on the problem of **any-variate input variables**, but it is indeed a key and trending topic in the time series domain. To achieve any-variate input variables, significant modifications to the current model architecture would be required. In fact, handling any-variate inputs is an inherent advantage of transformers, as the attention mechanism is naturally capable of addressing this issue. However, as you pointed out, using projections to transform variable dimensions imposes parameterized constraints, which limit the model's ability to handle truly any-variate input variables.\\n\\nAs mentioned earlier, iTransformer is a **series-wise token** model, and its architecture is specifically designed to address this challenge. If we aim to achieve similar functionality, we could take inspiration from its architectural design.\\n\\nRegarding the question of **why channel mixing is performed during the input projection stage**, the reason lies in the need to enable early interactions between different channels. As the input to subsequent modules, channel mixing at the input projection stage allows for the earliest possible exchange of information across channels. This is analogous to why it is necessary to embed time series points at the very beginning of the process.\"}",
"{\"summary\": \"The paper introduces a time series pattern machine method called TimeMixer++ for processing multiscale time series. The method transforms time series into multi-resolution time images to enable pattern extraction with respect to temporal and frequency domains followed by three dominant modules - (1) input projection; (2) a stack of Mixerblocks; (3) output projection. Extensive experiments show the proposed method obtains improvement over the well-established competing methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The methods used in the paper (e.g., time imaging and image decomposition) are very interesting. The evaluation is comprehensive: the authors discuss long and short-term forecasting, zero-short forecasting, classification, and anomaly detection.\", \"weaknesses\": \"In terms of the forecasting results shown in Tables 3 and 4, the performance gain is negligible, and such minor improved performance certainly can be attributed to the parameter tuning, e.g., a well-tuned parameter settings for TimeMixer++ while a weak parameter settings for other competing methods.\\n\\nThe paper barely offers insights both theoretically and experimentally. The theoretical understanding of the improvement as well as its time imaging and multi-resolution mixing is lacking, mostly based on intuition and simply blending the models. \\n\\nThere are some papers that already discussed the use of frequency analysis and the frequency components extraction for model deployment (e.g., [1][2][3]) to capture the periodic patterns, and they all claim it can capture the global interaction and patterns among time series, so what is the benefits of introducing multi-resolution time imaging, and it is worthwhile to compare them in ablation study? In addition, it is encouraged to cite the papers [1][2][3] if not yet in the references.\", \"references\": \"[1] Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors https://arxiv.org/pdf/2305.18803\\n[2] TFDNet: Time-Frequency Enhanced Decomposed Network for Long-term Time Series Forecasting https://arxiv.org/abs/2308.13386\\n[3] FEDNET: FREQUENCY ENHANCED DECOMPOSED NETWORK FOR OUT-OF-DISTRIBUTION TIME SERIES CLASSIFICATION https://openreview.net/forum?id=OVu9DsOjgH\", \"questions\": \"[1] Questions follow from the points listed in the weakness section.\\n[2] What is the naming of the method, i.e., TimeMixer++ in terms of? or just because both the methods target processing multi-scale time series? \\n[3] What is the connection with the method TimeMixer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for raising the score!\\nThe results listed in Part 2 were calculated as the mean of the results reported for each experiment in the original paper.\\nThe parameter settings and references are detailed in Appendix A of the _$\\\\underline{\\\\text{revised paper}}$_.\\nSpecifically,\\nwe set the initial learning rate as $10^{-2}$ or $10^{-3}$ and use the ADAM optimizer with L2 loss for model optimization.\\nThe batch size is 512.\\nBy default, we configure the number of MixerBlocks $L$ to 2 and set the number of resolutions $K$ to 3.\\nWe choose the number of scales $M$ according to the time series length to balance performance and efficiency.\\nTo handle longer series in long-term forecasting, we set $M$ to 3. As for short-term forecasting with\\nlimited series length, we set $M$ to 1.\\nFor baselines under the same experimental settings as our main study, we directly report the results from TimesNet [2], \\nfollowing standard practice as in prior works [1, 4, 5].\\nIn scenarios where experimental settings differed or tasks were not previously implemented, we reproduced the baseline results using\\nthe benchmark framework from the Time-Series Library [2, 3]. This framework is well-known and widely adopted in existing studies [1, 2, 5] and ensures \\nreproducibility and consistency. We have supplemented and refined the relevant parameter settings and references in the _$\\\\underline{\\\\text{revised paper}}$_.\\n\\nIf there are any aspects of the paper that you feel require further clarification or improvement, we would be happy to address them. \\nWe sincerely appreciate your recognition of our work, and we hope to receive your further support and constructive feedback moving forward.\\nThank you once again for your time and insightful suggestions.\\n\\n- [1] Liu, Yong, et al. \\\"itransformer: Inverted transformers are effective for time series forecasting.\\\" arXiv preprint arXiv:2310.06625 (2023).\\n- [2] Wu, Haixu, et al. \\\"Timesnet: Temporal 2d-variation modeling for general time series analysis.\\\" arXiv preprint arXiv:2210.02186 (2022).\\n- [3] https://github.com/thuml/Time-Series-Library\\n- [4] Bian, Yuxuan, et al. \\\"Multi-patch prediction: Adapting llms for time series representation learning.\\\" arXiv preprint arXiv:2402.04852 (2024).\\n- [5] Liu, Xu, et al. \\\"Unitime: A language-empowered unified model for cross-domain time series forecasting.\\\" Proceedings of the ACM on Web Conference 2024. 2024.\", \"title\": \"Thank you for your Feedback and Request for Further Support\"}",
"{\"title\": \"Acknowledge the author responses\", \"comment\": \"Dear Reviewers,\\n\\nThank you very much for your effort. As the discussion period is coming to an end, please acknowledge the author responses and adjust the rating if necessary.\\n\\nSincerely,\\nAC\"}",
"{\"title\": \"Response to Reviewer LKyA [Part 3]\", \"comment\": \"> **Q3:** \\\"More detail on how it compares to recent models like TimesNet and iTransformer on specific time series tasks would strengthen the paper\\u2019s claims.\\\"\\n\\nThank you for your feedback! We would like to compare TimeMixer++, TimesNet, and iTransformer in terms of **model design, benefits, empirical evidence, and implementation details.**\\n\\n**(1) Model Design**\\n\\nThe strength of TimeMixer++ lies in its **flexible and effective pattern decomposition and mixing strategies**. Specifically, TimeMixer++ processes multi-scale time series using four key components: (1) Multi-Resolution Time Imaging (MRTI), (2) Time Image Decomposition (TID), (3) Multi-Scale Mixing (MCM), and (4) Multi-Resolution Mixing (MRM).\\n\\n\\nWhile **TimesNet** also analyzes time series in the frequency domain by transforming 1D time series into 2D tensors, there are key differences: \\n- **Pattern Disentanglement**: TimesNet does not disentangle seasonal and trend patterns, limiting its flexibility in handling complex time series data across diverse tasks. \\n- **Mixing Strategies**: TimeMixer++ defines multiple scales in the time domain and various resolutions in the frequency domain through down-sampling. It employs task-adaptive strategies like MCM and MRM to extract representative patterns. TimesNet, however, **overlooks time-domain mixing**, which reduces its adaptability.\\n\\n\\nWhile **iTransformer** applies attention mechanisms for channel mixing, there are notable differences: \\n- **Building Blocks**: iTransformer primarily uses **feed-forward networks (FFN)** for encoding 1D time series. In contrast, TimeMixer++ transforms time series into **multi-resolution 2D time images**, enabling dual-axis attention for pattern decomposition and hierarchical convolutions for mixing. \\n\\n**(2) Benefits of TimeMixer++**\\n\\nThe flexibility of TimeMixer++ is evident in Figure 1 (right), which highlights the relationship between **CKA similarity** and task effectiveness across different scenarios.\\n\\n- **In forecasting**, TimeMixer++ demonstrates a clear advantage with the **highest CKA similarity (0.94)** and the lowest MSE (0.23), showcasing its ability to effectively align consistent representations with task-specific needs. A similar trend is observed in anomaly detection tasks.\\n\\n- **For classification**, TimeMixer++ demonstrates superior adaptability, achieving the best accuracy (90%) with **the lowest CKA similarity (0.75)**, effectively learning diverse representations. A similar trend is observed in imputation tasks.\\n\\nBy **dynamically adapting** to the diverse CKA-effectiveness relationships across tasks, **TimeMixer++ consistently outperforms TimesNet and iTransformer**, demonstrating superior flexibility and effectiveness in extracting diverse time series patterns.\\n\\n\\n**(3) Empirical Evidence**\\n\\n| Method| Long term Forecasting || Uni. Short Term Forecasting | Mul. 
Short Term Forecasting | Imputation | | Few-Shot | | Zero-Shot | | Classification | Anomaly Detection |\\n|--|----|----|----|------|---|---|----|-------|-----|-----|--------|------|\\n| | MSE| MAE| SMAPE| MAPE| MSE| MAE | MSE| MAE| MSE| MAE| ACC (%) | F1-Score (%) |\\n| TimeMixer++ | **0.286**| **0.301**| **11.448**| **10.08**| **0.063**| **0.149**| **0.332** | **0.371**| **0.386**| **0.408** | **75.9**|**87.47**|\\n| TimesNet| 0.363| 0.347| 11.829| 12.69| 0.085| 0.180| 0.491| 0.446| 0.527| 0.465| 73.6| 86.34 |\\n| iTransformer | 0.310| 0.305| 12.684| 12.55| 0.103| 0.191| 0.394| 0.442| 0.444 | 0.434| 70.5 | 76.98|\\n\\n- These experimental results validate the design advantages of TimeMixer++, demonstrating how its flexible and effective pattern decomposition and mixing strategies\\u2014enabled by components like MRTI, TID, MCM, and MRM\\u2014**consistently outperform competing models like TimesNet and iTransformer across diverse tasks**.\\n\\n**(4). Implementation Details**\\n\\nThe experimental setup is detailed in Appendix A. **For baselines with the same experimental settings** as our main study, we **directly report** the results from TimesNet [2]. For scenarios **where the settings differ or tasks are not implemented**, we **reproduced** the baselines using the benchmark framework from the **time series library** [1], which is **widely adopted in existing studies** [2,3] and ensures high consistency. The details of the hyperparameter configurations are provided in Appendix A. This pipeline is essential given the scope of our evaluation, which **includes 27 baselines and 30 benchmarks**. Deviating from this approach would introduce inconsistencies and undermine the reliability of the results.\\n\\nWe hope this additional information addresses the reviewer\\u2019s concerns.\\n\\n\\n- [1] https://github.com/thuml/Time-Series-Library\\n- [2] Wu, Haixu, et al. \\\"Timesnet: Temporal 2d-variation modeling for general time series analysis.\\\" arXiv preprint arXiv:2210.02186 (2022).\\n- [3] Liu, Yong, et al. \\\"itransformer: Inverted transformers are effective for time series forecasting.\\\" arXiv preprint arXiv:2310.06625 (2023).\"}",
"{\"comment\": \"Thank you for the comprehensive response. It is clear from the additional experiments analyzing cross domain performance and inference complexity that TimeMixer++ retains its advantages over other methods.\\n\\nThe considerations for future work and alternative model architectures add additional colour to your work. I will retain my current rating.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}",
"{\"summary\": \"The paper presents TimeMixer++, an advanced framework designed to enhance general time series analysis. TimeMixer++ integrates multi-resolution time imaging, multi-scale mixing, and dual-axis attention mechanisms to effectively capture and adapt to diverse patterns within time series data. This innovative approach allows for robust and flexible analysis across various temporal scales and resolutions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. the authors introduce a robust framework TimeMixer++ that leverages multi-resolution time imaging, multi-scale mixing, and dual-axis attention to enhance general time series analysis. They present SOTA results on four different tasks.\\n2. the integration of both multi-scale and multi-resolution mixing strategies for adaptive pattern extraction demonstrates innovation.\\n3. the manuscript and appendix are well-prepared, but the authors have not yet released the promised code.\", \"weaknesses\": \"1. the fonts in the figures should be enlarged for better readability. For example, in Figure 1 (right), the label \\\"Benchmarking model performance across representation analysis in four tasks\\\" appears blurred. Additionally, consider using a single set of legends for all four tasks to enhance clarity.\\n2. the source code repository has not released for reproducing, i will consider raising the score if the released repository and the consistency of the results.\\n3. more detail on how it compares to recent models like TimesNet and iTransformer on specific time series tasks would strengthen the paper\\u2019s claims.\\n4. including a discussion on computational efficiency (e.g., FLOPs, memory usage) for different tasks could enhance the paper\\u2019s utility.\", \"questions\": \"See weaknesses (W2, W3, W4).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper presents TimeMixer++, a general-purpose model for various time series tasks, including forecasting, classification, anomaly detection, and imputation. The proposed model achieves state-of-the-art performance across 8 time series analytical tasks, consistently surpassing both general-purpose and task-specific models. These results are obviously impressive, and this paper is worthwhile to receive further attention. Thus, I would like to recommend an accept as a spotlight paper.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers were satisfied with the authors' responses during the discussion period. One reviewer increased his/her rating.\"}",
"{\"comment\": \"We are thrilled that our responses have effectively addressed your questions and comments. We would like to express our sincerest gratitude for taking the time to review our paper and provide us with such detailed feedback.\"}",
"{\"title\": \"Response to Reviewer 9h69 [Part 1]\", \"comment\": \"Many thanks to Reviewer 9h69 for providing the insightful review and comments.\\n\\n> **W1:** \\\"In terms of the forecasting results shown in Tables 3 and 4, the performance gain is negligible, and such minor improved performance certainly can be attributed to the parameter tuning.\\\"\\n\\nWe would like to respectfully clarify that **the performance improvements achieved by TimeMixer++ are both substantial and consistent across all datasets and metrics, far exceeding what could be attributed to parameter tuning.** Below, we provide a focused discussion to address this concern.\\n\\n- As shown in **Table 3 (Forecasting Results)**, TimeMixer++ **consistently achieves the best performance** across all three metrics (MAE, MAPE, and RMSE). For example, TimeMixer++ achieves an **8.6%** relative improvement in MAE compared to the second-best method, TimeMixer. Additionally, it **significantly outperforms strong baselines** such as SCINet, Crossformer, and PatchTST, with relative reductions in MAE of **16.8%, 16.4%, and 30.9%**, respectively. These consistent improvements across datasets and metrics highlight TimeMixer++'s effectiveness, which **cannot be explained by minor parameter adjustments**.\\n\\n- In **Table 4 (Imputation Results)**, TimeMixer++ delivers the **best performance on 11 out of 12 metrics** across six datasets. Compared to TimesNet, the second-best model, TimeMixer++ achieves a **25.6% improvement** in average MSE (0.0632) and a **17.4% improvement** in average MAE (0.1487). These results further emphasize the model's robustness and ability to generalize across diverse imputation tasks.\\n\\nRegarding the concern about parameter tuning, we want to emphasize that **we ensured a rigorous and fair comparison by following a consistent experimental pipeline**. Specifically, for baselines with the same experimental settings as our main study, we **directly report** the results from TimesNet [2]. For scenarios where the settings differ or tasks were not implemented, we reproduced the baselines **using the benchmark framework from the time series library** [1], which is widely adopted in existing studies [2,3] and ensures high consistency. This pipeline is essential given the scope of our evaluation, which **includes 27 baselines and 30 benchmarks**. Deviating from this approach would introduce inconsistencies and undermine the reliability of the results. The experimental setup is detailed in Appendix A.\\n\\nIn summary, **the substantial and consistent improvements achieved by TimeMixer++, as demonstrated in Tables 3 and 4, clearly reflect the robustness and effectiveness of our approach.** These results were obtained by rigorously adhering to the established experimental pipeline.\\n\\n- [1] https://github.com/thuml/Time-Series-Library\\n- [2] Wu, Haixu, et al. \\\"Timesnet: Temporal 2d-variation modeling for general time series analysis.\\\" arXiv preprint arXiv:2210.02186 (2022).\\n- [3] Liu, Yong, et al. \\\"itransformer: Inverted transformers are effective for time series forecasting.\\\" arXiv preprint arXiv:2310.06625 (2023).\"}",
"{\"title\": \"Response to Reviewer 9h69 [Part 3]\", \"comment\": [\"> **W3:** There are some papers that already discussed the use of frequency analysis and the frequency components extraction for model deployment (e.g., [1][2][3]) to capture the periodic patterns, and they all claim it can capture the global interaction and patterns among time series, so what is the benefits of introducing multi-resolution time imaging, and it is worthwhile to compare them in ablation study? In addition, it is encouraged to cite the papers [1][2][3] if not yet in the references.\", \"We appreciate the reviewer\\u2019s insightful comments.\", \"While we recognize the relevance of the mentioned works, **a direct ablation comparison with these three studies may not be the most appropriate approach due to their distinct differences in model design, pattern learning capabilities, and respective objectives within the literature.**\", \"**(1) For Koopa [1]**\", \"**The architecture of TimeMixer++ is fundamentally different from Koopa**. While Koopa leverages the Fast Fourier Transform (FFT) to extract time-variant and time-invariant components, it does not incorporate advanced mixing strategies, as it primarily relies on modern Koopman theory. Specifically, Koopa processes the two components separately using a purely MLP-based encoder-decoder architecture and combines their outputs **through summation** to make predictions. In contrast, TimeMixer++ employs two mixing strategies to hierarchically and adaptively learn the input-output mapping function.\", \"Moreover, **Koopa is designed solely for forecasting tasks**, as presented in their paper. In contrast, **TimeMixer++ adopts learnable and flexible decomposition and mixing strategies, enabling it to achieve superior performance across general time series analysis tasks**, effectively serving as a comprehensive Time Series Pattern Machine (TSPM).\", \"**(2) For TFDNet [2]**\", \"**The architecture of TimeMixer++ is also fundamentally different from TFDNet**. While TFDNet leverages the seasonal-trend decomposition and mixing strategy, the decomposition is conducted by moving average directly. Besides, **it does not incorporate advanced mixing strategies**. Similar to Koopa, TFDNet processes the two components separately using kernel strategies and feed-forward network (FFD), and finally combines their outputs **through concatenation** to make predictions.\", \"Moreover, **TFDNet is also designed solely for forecasting tasks**, as presented in their paper. In contrast, **by adopting flexible decomposition and mixing strategies, TimeMixer++ achieves superior performance across a wide range of time series analysis tasks**.\", \"**(3) For FEDNet [3]**\", \"**The training paradigm and architecture of TimeMixer++ are fundamentally different from those of TFDNet**. While FEDNet employs frequency-domain decomposition to separate time-variant and time-invariant components, which are processed using **encoder-decoder architectures**, its training paradigm relies on **contrastive learning**.\", \"Moreover, FEDNet was proposed to address **out-of-distribution (OOD) time series classification problems**. Their code is **no longer accessible** at this stage, making it infeasible for us to conduct the experiments.\", \"We highly appreciate reviewer's efforts and valuable feedback. 
In response to your comments, **we have updated the introduction and related work sections in the _$\\\\underline{\\\\text{revised paper}}$_, adding the corresponding citations to better clarify the position of our work.**\", \"[1] Liu, Yong, et al. \\\"Koopa: Learning non-stationary time series dynamics with Koopman predictors.\\\" *Advances in Neural Information Processing Systems 36* (2024).\", \"[2] Luo, Yuxiao, Ziyu Lyu, and Xingyu Huang. \\\"TFDNet: Time-Frequency Enhanced Decomposed Network for Long-term Time Series Forecasting.\\\" *arXiv preprint arXiv:2308.13386* (2023).\", \"[3] FEDNet: Frequency Enhanced Decomposed Network for Out-of-distribution Time Series Classification. https://openreview.net/forum?id=OVu9DsOjgH\"]}",
"{\"summary\": \"The paper presents TIMEMIXER++, a general-purpose model for various time series tasks, including forecasting, classification, anomaly detection, and imputation. Utilizing a multi-scale, multi-resolution framework, the proposed method transforms time series data into multi-resolution images to capture complex temporal and frequency-domain patterns, enabling flexibility across analytical applications. The model\\u2019s approach includes dual-axis attention for decomposing seasonal and trend components and hierarchical multi-scale and multi-resolution mixing to integrate patterns across scales. The proposal achieves strong performance across eight benchmark tasks, outperforming both general-purpose and task-specific models. This work contributes to advancing time series analysis with new state-of-the-art benchmarks across settings.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"S1. The proposed model captures both short- and long-term dependencies by transforming time series data into multi-resolution images, enabling the analysis of complex temporal and frequency-domain patterns that challenge traditional models. The authors validate this with experimental results showing the new architecture outperforms SOTA models on most standard benchmarks. The ablation study helps validate the importance of the individual parts of the architecture \\u2013 the channel mixing, image decomposition, and multi-scale and multi-resolution mixing. This approach continues to validate the benefits of integrating image analysis techniques with time series tasks.\\n\\nS2. The architecture is flexible for supporting different kinds of time-series tasks. The hierarchical multi-scale and multi-resolution mixing modules enable the model to flexibly adapt across various time series tasks, from forecasting to anomaly detection, promoting robust and accurate performance across applications.\\n\\nS3. Empirical Validation: the testing in this paper on eight benchmark time series tasks, including the hyperparameter ablation results, shows TIMEMIXER++ consistently surpasses both general-purpose and task-specific models, affirming its potential as a high-performance, general-purpose solution for time series analysis. The experiments were very thorough.\", \"weaknesses\": \"W1. There is little exploration of scaling of model size, which would be an interesting avenue for validating the model architecture in a zero shot setting. The current zero-shot experiments are primarily in-domain and not cross-task.\", \"questions\": \"Q1. The proposed architecture adds significant computational cost to the internal representation of the model compared to vanilla transformers and some of the previously proposed models. It seems this does not have a significant effect on the training time compute and memory complexity of the model. Have the authors conducted any studies to compare the inference-time cost of TM++ compared to other methods?\\n\\nQ2. As mentioned by authors, some time series tasks (imputation, anomaly detection) benefit more from diverse representations while others like forecasting and classification benefit from consistent representation. Given this, is there any way to leverage a routing model dependent on the proposed task type, which could lower the inference-time cost of this model?\\n\\nQ3. MTS-Mixer (Li et. 
al., 2023) presents another approach to channel decomposition which similarly outperformed competing models, but they found the approach worked best with MLPs rather than attention-based models. Have the authors explored this technique for separating from attention mechanisms which could lead to further efficiency and model performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your rebuttal. As promised, I have updated my rating since you updated the anonymous GitHub repository.\"}",
"{\"title\": \"MixerBlock questions\", \"comment\": \"Why are you referring to the inputs of the mixer block as time series? presumably, at that point, the inputs are representations of the original multivariate data of feature-length $d_\\\\mathrm{model}$ coming from the channel attention?\\n\\nwhy are seq of representations $\\\\mathbf{x}^l_m$ a 1-D time series in line 243? If it's the output from the channel attention it has shape $d_\\\\mathrm{model} \\\\times T/2^m$ no?\\n\\nWhat is the motivation for decomposing the representations from the channel attention layer? My intuition was that at that point the output from the channel attention would not resemble time series, like trend or seasonality, even more so up the layers?\\n\\nDid you compare your model to prototypical mixer architectures like MLP-mixer, Conv-Mixer, and ViT style models that can easily be up-cylced for the multivariate forecasting task?\\n\\nthank you!\"}",
"{\"comment\": \"**3. Regarding the any-variate input lengths**\\n\\nIn fact, handling **any-variate input lengths** is typically a strength of transformer models[10], particularly those with **decoder-only** or **encoder-decoder** architectures, which are inherently designed to process variable-length sequences. On the other hand, **encoder-only** architectures are generally limited to handling fixed-length sequences. Modifications to the underlying architecture would be necessary to enable our model to handle variable-length input sequences. This could involve approaches such as padding or adopting a decoder-only architecture. I would suggest referring to the design of Time-MoE[7], which is specifically tailored for variable-length sequences and serves as a large-scale time series model.\\n\\n\\n**4. Regarding the data always has FFT amplitudes with a 1/f trend**\\nYou mentioned that the inherent 1/f trend in your data causes the Top-K frequency selection in MRTI to always return the lowest K frequencies, potentially overlooking high-frequency features that are crucial for certain classification tasks. Our recommendations are as follows:\\n\\n**Using a Detrending Method**: If high-frequency features are critical to your task, applying a detrending method (e.g., removing the 1/f component) before selecting the Top-K frequencies is a reasonable choice. This approach will help you fairly select key features across the frequency spectrum without being biased by the 1/f trend.\\n\\n**Retaining the Original Method**: If your task is more sensitive to low-frequency features, or if the 1/f trend itself contains important information, retaining the original method might be preferable. The final choice should depend on your understanding of the data characteristics and task requirements.\\n\\nWhether to modify the Top-K selection method ultimately depends on the needs of your task. If high-frequency features are indeed essential, we recommend trying detrending before frequency selection.\\n\\n\\n**5. Regarding the $p_k$ or $p_{k,m}$**\\n\\nThe reason we do not run FFT and pick the Top-K periods at different scales is that the Top-K periods calculated at the original scale cannot be directly applied to the downsampled scales. After downsampling, the lower scales already incorporate global, macro-level information, making it more appropriate to use a unified set of Top-K periods. Additionally, calculating separate Top-K periods for each scale would introduce a nested loop structure, significantly increasing computational complexity.\\n\\n**6. Regarding the Details on the 2D convolutions**\\n\\nWe adopted TimesNet's implementation of the Inception_Block [5] and followed the same configuration. You may refer to it for further details.\\n\\nThank you once again for your interest in our work. Medical time series is a vast and significant field, and exploring this area is crucial for real-world applications. Your insightful questions have been incredibly valuable and have provided us with meaningful inspiration.\\n\\nBest regards,\\n\\nAuthors.\\n\\n[1] Zhou, Haoyi, et al. (2021). Informer: Beyond efficient transformer for long sequence time-series forecasting. In *Proceedings of AAAI Conference on Artificial Intelligence*.\\n\\n[2] Wu, Haixu, et al. (2021). Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. *Advances in Neural Information Processing Systems*.\\n\\n[3] Yong, Liu, et al. (2024). 
iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. In *International Conference on Learning Representations*.\\n\\n[4] Nie, Yuqi, et al. (2023). A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. In *International Conference on Learning Representations*.\\n\\n[5] Wu, Haixu et al. (2023). TimesNet: Temporal 2D-variation modeling for general time series analysis. In *International Conference on Learning Representations*.\\n\\n[6] Wang, Yuxuan, et al. (2024). Deep Time Series Models: A Comprehensive Survey and Benchmark. In *Transactions on Pattern Analysis and Machine Intelligence*.\\n\\n[7] Shi, Xiaoming, et al. (2025). Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts. In *International Conference on Learning Representations*.\\n\\n[8] Alexey Dosovitskiy, et al. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In *International Conference on Learning Representations*.\\n\\n[9] Rico Sennrich, et al. (2016). Neural Machine Translation of Rare Words with Subword Units. In *Annual Meeting of the Association for Computational Linguistics*.\\n\\n[10] Ashish Vaswani, et al. (2017). Attention Is All You Need. In *Conference on Neural Information Processing System*.\", \"title\": \"Part 2: Thank you for your thoughtful comments.\"}",
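As a concrete illustration of the detrending option discussed in point 4 above, here is a minimal NumPy sketch that divides out a fitted 1/f amplitude trend before selecting the Top-K frequencies. The function name and the log-log linear fit are our own assumptions about one reasonable way to detrend, not part of the TimeMixer++ code.

```python
import numpy as np

def topk_periods_detrended(x: np.ndarray, k: int = 5):
    """Pick Top-K dominant periods of a 1-D series after removing a 1/f amplitude trend."""
    T = len(x)
    amp = np.abs(np.fft.rfft(x))             # amplitude spectrum, length T//2 + 1
    freqs = np.arange(len(amp))
    valid = freqs > 0                         # drop the DC component

    # Fit log-amplitude as a linear function of log-frequency (captures the 1/f trend)
    slope, intercept = np.polyfit(np.log(freqs[valid]), np.log(amp[valid] + 1e-8), deg=1)
    trend = np.exp(intercept + slope * np.log(freqs[valid]))

    residual = amp[valid] / trend             # amplitudes with the 1/f trend divided out
    topk_bins = freqs[valid][np.argsort(residual)[-k:]]
    return T // topk_bins, topk_bins          # period lengths and their frequency bins
```

If the low-frequency content is itself informative, the detrending step can simply be skipped, which recovers the original Top-K selection.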
"{\"title\": \"Response to Reviewer 9h69 [Part 2]\", \"comment\": \"> **W2:** The paper barely offers insights both theoretically and experimentally. The theoretical understanding of the improvement as well as its time imaging and multi-resolution mixing is lacking.\\n\\nWe would like to **clarify a potential misunderstanding** and **re-emphasize the position of our work within the literature**.\\n\\n- Our primary objective in this paper is to propose a novel pattern extraction model for general time series analysis, which we refer to as the **Time Series Pattern Machine (TSPM)**, as introduced on line 39 in the original paper. **We have not made any claims regarding theoretical contributions in this submission. Instead, our focus is on introducing a practical and innovative framework that empirically advances the state of the art across eight diverse time series analysis tasks.**\\n\\n- The core of our approach lies in disentangling seasonality and trend patterns from multi-resolution time images using Time Image Decomposition (TID), followed by Multi-Scale Mixing (MCM) and Multi-Resolution Mixing (MRM). **This design achieves empirically significant improvements and establishes new benchmarks across various tasks, contributing to the broader time series analysis literature.**\\n\\n- Moreover, our model is **inspired by established theories** in time series analysis and signal processing, particularly **multi-resolution analysis**, which is widely used to decompose signals into components that capture variations across scales. This theoretical framework, rooted in techniques like wavelet transforms and multi-scale signal processing, serves as the foundation for our approach. By introducing **the concept of multi-resolution time imaging**, we transform 1D multi-scale time series into 2D images, enabling a **structured disentanglement of seasonal and trend components in latent spaces**. This enables TimeMixer++ to effectively capture global and localized patterns in a way that is well-grounded in signal processing principles.\\n\\nTo place our work in context, **iTransformer** and **TimesNet**, two well-established and state-of-the-art benchmark models in the literature, provide strong baselines for comparison. **To highlight the experimental contributions of TimeMixer++, we present a concise summary of the average performance of these models across multiple tasks**, demonstrating the robustness and adaptability of TimeMixer++.\\n\\n| Method | Long term Forecasting | | Uni. Short Term Forecasting | Mul. 
Short Term Forecasting | Imputation | | Few-Shot | | Zero-Shot | | Classification | Anomaly Detection |\\n|---|---|---|---|---|---|---|---|---|---|---|---|---|\\n| | MSE | MAE | SMAPE | MAPE | MSE | MAE | MSE | MAE | MSE | MAE | ACC (%) | F1-Score (%) |\\n| TimeMixer++ | **0.286** | **0.301** | **11.448** | **10.08** | **0.063** | **0.149** | **0.332** | **0.371** | **0.386**| **0.408** | **75.9** | **87.47** |\\n| TimesNet | 0.363 | 0.347 | 11.829 | 12.69 | 0.085 | 0.180 | 0.491 | 0.446 | 0.527 | 0.465 | 73.6 | 86.34 |\\n| iTransformer | 0.310 | 0.305 | 12684 | 12.55 | 0.103 | 0.191 | 0.394 | 0.442 | 0.444 | 0.434 | 70.5 | 76.98 |\\n\\nWe can have the following observations:\\n\\n- These results provide compelling evidence of the effectiveness of TimeMixer++ in **capturing diverse time series patterns**. \\n- **TimeMixer++ achieves consistent improvements over TimesNet and iTransformer, underscoring its contribution to advancing the state of the art in general time series analysis**. \\n\\nFurthermore, we hope that the empirical insights presented in Figure 1 (right) will **inspire the research community** to further explore the TSPM paradigm through **the lens of representation learning**. By harnessing its **strong pattern extraction capabilities**, TimeMixer++ **exhibits robust adaptability across tasks**, as demonstrated by the comprehensive representation learning experiments in Appendix E of the _$\\\\underline{\\\\text{revised paper}}$_. These experiments provide valuable insights into why TimeMixer++ performs effectively across diverse tasks, highlighting its ability to adapt learned representations to task-specific requirements.\\n\\nWe hope this additional information addresses the reviewer\\u2019s concerns.\"}",
"{\"title\": \"Response to Reviewer e5Jj [Part 1]\", \"comment\": \"We sincerely appreciate reviewer e5Jj for considering our work is novel and solid, and we greatly appreciate the acknowledgement of our contributions. We have addressed the specific concerns raised by the reviewer as detailed below:\\n\\n> **W1:** \\\"There is little exploration of scaling of model size, which would be an interesting avenue for validating the model architecture in a zero shot setting. The current zero-shot experiments are primarily in-domain and not cross-task.\\\"\\n\\n(1) We appreciate the reviewer\\u2019s insightful suggestions regarding the scaling of model size. As highlighted in Appendix L, exploring the scalability of TimeMixer++ is a direction for future work. In this study, we introduced a powerful backbone model as an initial step towards building a universal time-series pattern machine (TSPM).\\n\\n(2) We also greatly appreciate your valuable suggestion regarding the zero-shot setting. In response to your concern, we **conducted additional experiments on two well-established cross-domain datasets, M3 and M4, under zero-shot conditions.** The results are summarized below:\\n\\n| Method | **TimeMixer++** | FPT | DLinear | PatchTST | AutoTimes | TimesNet | Nsformer | FEDformer | Informer | Reformer |\\n|---------------|-----------------|-------|---------|----------|-----------|---------|----------|-----------|----------|---------|\\n| **M4 \\u2192 M3** | **12.49** | 13.06 | 14.03 | 13.06 | 12.75 | 14.17 | 15.29 | 13.53 | 15.82 | 13.37 |\\n| **M3 \\u2192 M4** | **12.76** | 13.13 | 15.34 | 13.23 | 13.04 | 14.55 | 14.33 | 15.05 | 19.05 | 14.09 |\\n\\n\\nM4 \\u2192 M3 means training the model on the datasets of M4 and then evaluating on M3, and vice versa. **TimeMixer++ demonstrates the best performance across both tasks**, outperforming other methods and showcasing its superior ability to generalize temporal patterns without task-specific training. \\nTimeMixer++ consistently delivers the lowest errors, with **improvements ranging from 2% to 33%** compared to competing methods. All of these results have been included in Table.13 (Appendix.D) of the _$\\\\underline{\\\\text{revised paper}}$_.\\n\\n> **Q1:** \\\"The proposed architecture adds significant computational cost to the internal representation of the model compared to vanilla transformers ... Have the authors conducted any studies to compare the inference-time cost of TM++ compared to other methods?\\\"\\n\\nThank you for your insightful comments and suggestions regarding the inference-time cost of TimeMixer++. We greatly appreciate your feedback.\\n\\n**(1). Theoretical Analysis of Time Complexity**\\n\\nAssuming the input time series has a length of $T$ and a channel count of $C$ (with $C \\\\ll T$ in practice):\\n\\n- **Vanilla Transformer**: The time complexity is $O(T^2)$ due to the application of full attention along the temporal dimension.\\n- **TimeMixer++ Input Projection**: TimeMixer++ applies channel-wise full attention only in the input projection step for channel mixing, which has a time complexity of $O(C^2)$.\\n- **Stacked MixerBlock in TimeMixer++**: In the stacked MixerBlock, TimeMixer++ transforms the input time series into time images for more efficient modeling. Specifically, rather than employing full attention, we utilize **efficient dual-axis attention with a time complexity of $O(T \\\\sqrt{T})$** for seasonal-trend decomposition, which generates seasonal and trend images. 
These images are then processed with efficient convolutional methods. As shown in Figure 13, by avoiding the use of full attention along the temporal dimension, we achieve improvements in training efficiency.\\n\\n\\n**(2). Experimental Results: Inference Time Comparison**\\n\\nWe conducted experiments to evaluate inference time on long-term forecasting tasks. Below is a summary of the findings:\\n\\n| Model Name | TimeMixer++ | iTransformer | PatchTST | FEDformer | TimeMixer | TIDE | TimesNet | SCINet |\\n|---|---|---|---|---|---|---|---|---|\\n| **Inference Time (ms/iter)** | 90 | 130 | 105 | 240 | 90 | 85 | 160 | 150 |\\n\\nWe can have the following observations:\\n\\n- TimeMixer++ achieves **90 ms/iter inference time**, matching the performance of TiDE (85 ms/iter).\\n- TimeMixer++ is significantly faster than:\\n  - TimesNet (160 ms/iter): **43.75% speedup**\\n  - SCINet (150 ms/iter): **40% speedup**\\n\\nFurthermore, as shown in the appendix F of the _$\\\\underline{\\\\text{revised paper}}$_, TimeMixer++ achieves significantly lower prediction error than baselines at comparable inference speed. For example, on ETTm1, it reduces MSE by 10.4% compared to TiDE, with both achieving 85-90 ms/iter inference time.\\n\\nAll results are provided in the Appendix D of the _$\\\\underline{\\\\text{revised paper}}$_.\"}",
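For reference, per-iteration inference times of the kind reported above (ms/iter) are typically obtained with a simple timing harness such as the following PyTorch sketch; this is a generic illustration under our own assumptions, not the authors' benchmarking script.

```python
import time
import torch

@torch.no_grad()
def ms_per_iter(model, batch, n_warmup: int = 10, n_iters: int = 100) -> float:
    """Average forward-pass latency in milliseconds per iteration."""
    model.eval()
    for _ in range(n_warmup):                # warm-up excludes lazy init / autotuning
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()             # flush queued GPU kernels before timing
    start = time.perf_counter()
    for _ in range(n_iters):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000.0 / n_iters
```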
"{\"comment\": \"Do you have any references for the parameter settings were used in these results?\"}",
"{\"title\": \"Questions regarding the paper\", \"comment\": \"First, thank you for your amazing work! I am interested in applying your model in our domain (medical timeseries), and had a few questions if you don\\u2019t mind.\\n\\n**Questions regarding pretraining** : In Appendix L \\u201cLimitations and Future Work\\u201d, you say TimeMixer++ can be an effective backbone model, and will explore scaling the model. We also want to do this for our domain, but realized that some aspects of TimeMixer++ are not well suited for pretraining and wanted to ask your opinion if you don\\u2019t mind : \\n1. **Do you have any suggestions on what type of SSL pretext task could be used?** \\n - As far as I know, current timeseries foundation models (e.g., MOIRAI, TimeMOE, Timer-XL) use GPT/BERT-like pretext tasks for pretraining. However, because your model don\\u2019t use patches (tokens), using GPT/BERT-like pretext tasks seems impossible. Do you have suggestions on how to resolve this (either by modifying the model or choosing a different pretext task?)\\n - Also, it seems that current time-series foundation models only use single patch sizes as inputs. As an outsider this perplexes me. If possible, could you please tell me why? Your paper and others seem to argue that using single sized tokens to represent timeseries is not ideal, yet many foundation models use tokens.\\n2. **How would you modify your model to adapt to varying number of input variables?**\\n - Due to how input projection is designed, it seems that the model cannot be trained on datasets with different number of channels. Do you have any suggestions on how to resolve this?\\n - Also, was there a reason why you mixed the channels early (during input projection from $C$ to $d_{model}$), and keep the channel dimension and model them in the MixerBlock?\\n3. **Do you think TimeMixer++ could adapt to different input lengths?**\\n - As far as I understand, it seems that at least computationally the model can be applied to different input lengths. Is my understanding correct? Do you think that fine-tuning the model to a task with much shorter input length compared to the pretraining dataset will be ok? \\n \\n\\n\\n**Questions regarding the model itself :**\\n\\n1. **Would your model still work well if the data always has FFT amplitudes with a 1/f trend?** : Due to the inherent 1/f trend in our data, selecting the top-k frequencies during MRTI would always return the k lowest frequency values, making the model ignore higher frequency features (which are crucial for certain classification tasks in our field). Do you think it will be OK for us to use a different top-k frequency selection method (e.g., selecting the top-k after detrending the 1/f component)? Or should we stick to the original method? \\n2. **Is $p_k$ actually $p_{k,m}$? :** It seems that $p_k$, obtained from running FFT and picking the Top-K periods is the value with respect to the scale $M$. Wouldn't it be that for different scales the same period length would be different number of timepoints (for example, $p_k$ obtained in scale $M$ should equal $2p_k$ in scale $M-1$?)\\n3. **Details on the 2D convolutions you used in Time Image Decomposition and Multiscale Mixing? :** Could you please tell the kernel sizes and such that were used for the 2D convolutions you used in your model? It would greatly help in understanding your paper!\\n\\nThank you!\"}",
"{\"comment\": \"Dear Kashif,\\n\\nThank you very much for your interest in our work and for your constructive comments. We greatly appreciate your thoughtful questions and suggestions, which have provided us with valuable insights. Below, we address your questions in detail:\\n\\n**1. Regarding the inputs to the mixer block and the notation of time series representations**\\n\\nThank you for pointing this out. You are correct that the inputs to the mixer block are time-series representations with the shape $\\\\mathbb{R}^{\\\\lfloor T/2^M \\\\rfloor \\\\times d_{\\\\text{model}}}$, and $x_m^l$ in line 243 also represents a time series with the same shape. We refer to these as \\\"time series\\\" because they retain the temporal dimension, consistent with conventions in prior works such as Autoformer[2]. Specifically, the \\\"1D time series\\\" terminology in line 243 follows the usage in TimesNet[4], where a time series with the shape $\\\\mathbb{R}^{T \\\\times C}$ is described as a 1D time series.\\n\\nWe sincerely are grateful for your suggestion, as we recognize that this terminology might lead to unnecessary ambiguity or misunderstanding. To improve clarity, we will revise the phrasing to \\\"time-series representation\\\" in the final version of our paper. Thank you for bringing this to our attention.\\n\\n**2. On the motivation for decomposing representations from the channel attention layer**\\n\\nThe channel attention layer in our model serves as part of the input projection, similar to the input projection embedding approaches introduced since Informer[1]. In these methods, the channel dimension of multivariate time series is embedded, while the temporal and channel dimensions remain orthogonal. Even after embedding, the temporal variation characteristics are preserved, and the decomposition primarily focuses on the temporal dimension[6].\\n\\nOur motivation for adopting decomposition stems from the successes of prior works such as Autoformer[2], Fedformer[3], and MICN[5], which have demonstrated the effectiveness of decomposition in capturing temporal patterns in deep spaces[6]. While our work builds upon these ideas, we acknowledge that decomposition methods in deep learning for time series remain an open area of research with significant potential for further exploration.\\n\\n**3. Comparison with prototypical mixer architectures**\\n\\nIndeed, pioneering works like MLP-Mixer have inspired our exploration of mixer architectures for time series. For handling 2D time series, we are currently experimenting with more straightforward approaches. Your observation is insightful, and we are aware of recent works leveraging ViT for 2D time series[7]. This promising direction highlights the potential for deeper intersections between time-series and CV fields, paving the way for multimodal methods that seamlessly integrate time-series and image modalities and opening new possibilities for multivariate forecasting.\\n\\nOnce again, thank you for your thoughtful observations and for taking the time to share your suggestions. Your insights have been truly inspiring, and we sincerely appreciate your engagement.\\n\\nBest regards,\\n\\nAuthors.\\n\\n[1] Zhou, Haoyi, et al. (2021). Informer: Beyond efficient transformer for long sequence time-series forecasting. In *Proceedings of AAAI Conference on Artificial Intelligence*.\\n\\n[2] Wu, Haixu, et al. (2021). Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. 
*Advances in Neural Information Processing Systems*.\\n\\n[3] Zhou, Tian, et al. (2022). FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In *Proceedings of the 39th International Conference on Machine Learning*.\\n\\n[4] Wu, Haixu, et al. (2023). TimesNet: Temporal 2D-variation modeling for general time series analysis. In *International Conference on Learning Representations*.\\n\\n[5] Wang Huiqiang, et al. (2023). MICN: Multi-scale local and global context modeling for long-term series forecasting. In *International Conference on Learning Representations*.\\n\\n[6] Wang Yuxuan, et al. (2024). Deep Time Series Models: A Comprehensive Survey and Benchmark. In *Transactions on Pattern Analysis and Machine Intelligence*.\\n\\n[7] Zhong Siru, et al. (2025). Time-VLM: Exploring Multimodal Vision-Language Models for Augmented Time Series Forecasting. In *arXiv preprint*.\", \"title\": \"Thank you for your constructive comments.\"}"
]
} |
1CIUkpoata | 6D Object Pose Tracking in Internet Videos for Robotic Manipulation | [
"Georgy Ponimatkin",
"Martin Cífka",
"Tomas Soucek",
"Médéric Fourmy",
"Yann Labbé",
"Vladimir Petrik",
"Josef Sivic"
] | We seek to extract a temporally consistent 6D pose trajectory of a manipulated object from an Internet instructional video. This is a challenging set-up for current 6D pose estimation methods due to uncontrolled capturing conditions, subtle but dynamic object motions, and the fact that the exact mesh of the manipulated object is not known. To address these challenges, we present the following contributions. First, we develop a new method that estimates the 6D pose of any object in the input image without prior knowledge of the object itself. The method proceeds by (i) retrieving a CAD model similar to the depicted object from a large-scale model database, (ii) 6D aligning the retrieved CAD model with the input image, and (iii) grounding the absolute scale of the object with respect to the scene. Second, we extract smooth 6D object trajectories from Internet videos by carefully tracking the detected objects across video frames. The extracted object trajectories are then retargeted via trajectory optimization into the configuration space of a robotic manipulator. Third, we thoroughly evaluate and ablate our 6D pose estimation method on YCB-V and HOPE-Video datasets as well as a new dataset of instructional videos manually annotated with approximate 6D object trajectories. We demonstrate significant improvements over existing state-of-the-art RGB 6D pose estimation methods. Finally, we show that the 6D object motion estimated from Internet videos can be transferred to a 7-axis robotic manipulator both in a virtual simulator as well as in a real world set-up. We also successfully apply our method to egocentric videos taken from the EPIC-KITCHENS dataset, demonstrating potential for Embodied AI applications. | [
"6DoF pose estimation",
"robotic manipulation from video"
] | Accept (Poster) | https://openreview.net/pdf?id=1CIUkpoata | https://openreview.net/forum?id=1CIUkpoata | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"waWWJR5zBv",
"oXljdnQBkY",
"mdqpxl6M1E",
"mW2SDyQeGo",
"hjdEVUibdX",
"hb9uoq9deA",
"eTQdoTkw10",
"aOEXvIKqXn",
"Y0k4BcpTyK",
"RpU41DEmy3",
"OEAqMHok44",
"LlzXuhBBEw",
"LjbU6nZAww",
"LRiKVwj99l",
"HWIQdYQTnk",
"GcWQOeKXWg",
"69APGsSoKZ",
"5qq9TZPd4G",
"3MTr8u7xBW",
"3HLv5KrbZl",
"0m58ucuesg"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_review"
],
"note_created": [
1732197693195,
1732198635375,
1733003386171,
1734420867838,
1732574465624,
1732197734406,
1730613985414,
1732199765804,
1732198720815,
1732199840538,
1730647337464,
1732198729780,
1732481168059,
1733003371279,
1732198677075,
1732583547021,
1732539704725,
1732199898059,
1730660194108,
1737523894852,
1730352559562
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Area_Chair_yotE"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_4DS1"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_4DS1"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_JJFi"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_x6Vw"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_TaH5"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_JJFi"
],
[
"ICLR.cc/2025/Conference/Submission8215/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_x6Vw"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8215/Reviewer_TaH5"
]
],
"structured_content_str": [
"{\"comment\": \"### **Weaknesses**\\n> The authors demonstrate that compared to model-based methods, whose performances suffer from the inaccurate CAD mode, their method addresses the challenge. However, there is lack of experiments compared to SOTA model-based methods with their fetched CAD models (e.g. FoundationPose with their retrieved CAD model).\\n\\n**Response:** \\nWe compared our approach against state-of-the-art methods from the BOP challenge in the \\u201cunseen-objects\\u201d category. Our target application is YouTube videos, where depth measurements are not accessible, so we limit our comparison to RGB-only inputs. The top methods from the BOP challenge are Co-op (GenFlow based) \\\\+ GenFlow \\\\[A\\\\], GigaPose \\\\[B\\\\], FoundPose \\\\[C\\\\], and MegaPose \\\\[D\\\\]. Please note that FoundationPose is not part of the list as it is RGBD-based.\\n\\nUnfortunately, the open-source implementation of Co-op/GenFlow is not available. In the paper, we show a comparison with GigaPose and MegaPose (see Tab. 1). At the time of submission, the open-source implementation of FoundPose was not available, so we verified one of its key ideas (the Bag-of-Words approach) on our method and observed that it did not perform well with inaccurate CAD models. The FoundPose code is now available, and we evaluate it in the table below. It can be seen that while the implementation, which includes their translation estimation, improves the results, FoundPose still suffers from inaccuracies in the mesh compared to our proposed approach. We would be happy to compare with any other publicly available approach. \\n\\n\\\\[A\\\\] Moon, Sungphill, et al. \\\"Genflow: Generalizable recurrent flow for 6d pose refinement of novel objects.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024\\\\. \\n\\\\[B\\\\] Nguyen, Van Nguyen, et al. \\\"Gigapose: Fast and robust novel object pose estimation via one correspondence.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024\\\\. \\n\\\\[C\\\\] \\u00d6rnek, Evin P\\u0131nar, et al. \\\"Foundpose: Unseen object pose estimation with foundation features.\\\" *European Conference on Computer Vision*. Springer, Cham, 2025\\\\. \\n\\\\[D\\\\] Labb\\u00e9, Yann, et al. \\\"MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare.\\\" *CoRL 2022-Conference on Robot Learning*. 2022\\\\.\\n\\n| | YCB-Video | | | | HOPE-Video | | | |\\n| :---- | ----- | :---- | :---- | :---- | ----- | :---- | :---- | :---- |\\n| Method | AR | AR$_{CoU}$ | AR$_{CH}$ | AR$_{pCH}$ | AR | AR$_{CoU}$ | AR$_{CH}$ | AR$_{pCH}$ |\\n| MegaPose (w/o refiner) | 23.75 | 10.08 | 10.65 | 50.53 | 31.77 | 9.96 | 6.87 | 78.50 |\\n| MegaPose | 25.76 | 14.01 | 11.91 | 51.37 | 33.03 | 13.07 | 6.38 | 79.64 |\\n| GigaPose | 29.18 | 11.90 | 9.20 | 66.45 | 23.12 | 4.15 | 4.90 | 60.30 |\\n| FoundPose (full) | 42.95 | 35.40 | 15.69 | 77.75 | 42.30 | 31.18 | 9.58 | 86.13 |\\n| Ours | **49.86** | **45.20** | **18.53** | **85.83** | **45.98** | **39.21** | **10.72** | **88.01** |\\n\\n> In the 6D pose alignment part, the method applies a sapling-based trajectory to get the rotation, which potentially limits the accuracy of the rotation. 
In the results figure, there are some rotation errors, not sure if due to the sampling-based strategy or the DINO feature extractor.\\n\\n**Response:** \\nTo examine whether rotation errors are caused by sampling, we perform a new ablation study on YCB-V dataset in which we increase the number of sampled views in the utilized sampling strategy \\\\[SuperFib\\\\]. In our method, we use N \\\\= 600 samples, which, on average, lead to a \\\\~25-degree geodesic error between the closest rotations. When increasing N to 1200 or 1800, we observe that the overall pipeline performance remains similar (within a statistical error), while the computational requirements for storage and runtime increase linearly. Our prerendered mesh database takes \\\\~1TB of disk space for N=600 and \\\\~2 TB for N=1200 views and \\\\~3 TB for N \\\\= 1800 views. The runtime also scales linearly from \\\\~0.2s per object to \\\\~0.4s per object for 1200 views and \\\\~0.6s per object for 1800 views.\\n\\n| N$_{samples}$ | AR | AR$_{CoU}$ | AR$_{CH}$ | AR$_{pCH}$ | Avg Err |\\n| :---- | :---- | :---- | :---- | :---- | :---- |\\n| 600 | 49.86 | 45.20 | 18.53 | 85.83 | \\\\~25 deg |\\n| 1200 | 49.66 | 44.64 | 19.10 | 85.24 | \\\\~20 deg |\\n| 1800 | 49.10 | 43.95 | 18.25 | 85.11 | \\\\~16 deg |\\n\\n\\\\[SuperFib\\\\] Marc Alexa, Super-Fibonacci Spirals: Fast, Low-Discrepancy Sampling of SO(3), CVPR2022\"}",
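The nearest-neighbor geodesic error quoted in the ablation above can be computed as in the following sketch. The helper name is ours, and the example draws random rotations rather than the Super-Fibonacci samples used in the paper, so it only approximates the reported ~25-degree figure for N = 600.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def mean_nn_geodesic_deg(rotations: R) -> float:
    """Average geodesic distance (deg) from each rotation to its closest other sample."""
    mats = rotations.as_matrix()                   # (N, 3, 3)
    traces = np.einsum('nij,mij->nm', mats, mats)  # trace(Ri^T Rj) for all pairs
    angles = np.degrees(np.arccos(np.clip((traces - 1.0) / 2.0, -1.0, 1.0)))
    np.fill_diagonal(angles, np.inf)               # ignore self-distances
    return float(angles.min(axis=1).mean())

print(mean_nn_geodesic_deg(R.random(600)))
```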
"{\"comment\": \"### **Weaknesses**\\n\\n> l. 197ff, CAD model retrieval by rendering views and calculating visual features seems expensive in both, the database generation and the retrieval stage for large datasets such as Objaverse-LVIS. What is the retrieval time for these datasets and how is it implemented to make retrieval efficient?\\n\\n**Response:** \\nWhile it is true that rendering and extraction of visual features is expensive, this process has to be done only once. The Objaverse-LVIS dataset consists of \\\\~50,000 objects, and the whole dataset can be rendered in \\\\~3 days on one 8-GPU node. Extraction of visual features for retrieval then takes around \\\\~3 days on 8-GPU node as well. In practice, we used the HPC cluster to parallelize and speed up the rendering and extraction process significantly.\\n\\nThe retrieval process is done by matching a single 1024D FFA descriptor extracted from a query image to a database of \\\\~50,000 1024D descriptors (one descriptor for one object) by means of the dot product. This step takes a fraction of a second on a single GPU. The dense features for matching the views are then computed on the fly, and they take around \\\\~0.2s per object on AMD MI250X GPU to extract and match 600 CAD model views to the query image.\\n\\n> l. 220ff proposes to retrieve rotation by matching to a set of rendered views. What is the choice of N in the experiments? What is the avg/std angular distance between sampled rotations?\\n\\n**Response:** \\nFor rotation sampling, we use the strategy described in \\\\[SuperFib\\\\]. We used 600 samples, which resulted in an average geodesic distance error of 25 degrees between the closest rotations, with a standard deviation of 2 degrees. We will include this missing parameter value in the revised paper. We also perform a new ablation study in which we increase the number of samples N to 1200 or 1800\\\\. We observe that the overall performance remains similar, while the computational requirements for storage and runtime increase linearly. Our prerendered mesh database takes \\\\~1TB of disk space for N=600 and \\\\~2 TB for N=1200 views and \\\\~3 TB for N \\\\= 1800 views. The runtime also scales linearly from \\\\~0.2s per object to \\\\~0.4s per object for 1200 views and \\\\~0.6s per object for 1800 views.\\n\\n| N$_{samples}$ | AR | AR$_{CoU}$ | AR$_{CH}$ | AR$_{pCH}$ | Avg Rot. Err | Std. Dev. Rot. Err |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| 600 | 49.86 | 45.20 | 18.53 | 85.83 | \\\\~25 deg | \\\\~2 deg |\\n| 1200 | 49.66 | 44.64 | 19.10 | 85.24 | \\\\~20 deg | \\\\~1 deg |\\n| 1800 | 49.10 | 43.95 | 18.25 | 85.11 | \\\\~16 deg | \\\\~1.4 deg |\\n\\n\\\\[SuperFib\\\\] Marc Alexa, Super-Fibonacci Spirals: Fast, Low-Discrepancy Sampling of SO(3), CVPR2022\\n\\n> l. 243ff, the way to prompt the LLM in the supplementary is an offline procedure to collect size estimates for approximately 2200 objects. In the main paper, the description reads as if the LLM is prompted for each detected object using the CLIP text classification. Please describe this more clearly. What if the detected object is not included in the offline calculated set?\\n\\n**Response:** \\n\\nWe apologize for the confusion. The LLM prompting happens offline, as we use a fixed set of text labels and corresponding scales for all images and detected objects. After generating the description-scale pairs with the LLM, we pre-extract CLIP features from the text descriptions and store them with their corresponding scales. 
During inference, we compute CLIP features for the detected objects in the images and retrieve the best-matching text descriptions along with their scales, as detailed in the supplementary material in section A.3.\\n\\nImportantly, our set of generated text descriptions is not tailored to any specific scene type. Instead, it is designed to encompass a wide range of everyday objects. This set can be easily adapted to specific use cases by generating a new set of descriptions using a modified prompt.\\n\\nIf the generated set lacks text descriptions closely matching the object in the image, the retrieved scales may contain some inaccuracies. However, our method is inherently robust to a certain level of scale error. First, we retrieve multiple matching text descriptions for each detected object and apply median aggregation. Second, even if the aggregated scales are inaccurate (e.g., due to the absence of a closely matching description in the set), these scales are not used directly. Instead, they contribute to computing the global correction factor \\u03c1, which is applied to the relative scales derived from the predicted depth map, as explained in the \\u201cGlobal Rescaling\\u201d paragraph of section A.3 in the supplementary material. Thus, when the image contains enough objects covered by the generated set, these objects (with correctly estimated scales) dominate the estimation of the global correction factor, leading to overall improved scale estimates.\"}",
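The two retrieval steps described above reduce to nearest-neighbor look-ups over pre-extracted descriptors; a minimal sketch under our own naming is shown below. The array shapes follow the text, the choice of k is illustrative, and the returned per-object scale is only an intermediate quantity that later enters the global correction factor ρ.

```python
import numpy as np

def retrieve_cad_and_scale(query_ffa, db_ffa, query_clip, text_clip, text_scales, k=5):
    """query_ffa: (1024,) descriptor of the detected object crop.
    db_ffa:      (~50000, 1024) one pre-extracted descriptor per CAD model.
    query_clip:  (D,) CLIP image feature of the crop.
    text_clip:   (M, D) CLIP text features of the generated descriptions.
    text_scales: (M,) metric scale paired with each description."""
    # CAD retrieval: best match by dot product over the descriptor database
    cad_idx = int(np.argmax(db_ffa @ query_ffa))
    # Scale estimate: median over the scales of the k best-matching text descriptions
    best = np.argsort(text_clip @ query_clip)[-k:]
    return cad_idx, float(np.median(text_scales[best]))
```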
"{\"comment\": \"**Results.** We used the annotated videos and defined metrics to compute the rotation, projected translation, and scaled depth errors for our method and the MegaPose baseline, as shown in the table below. This initial small-scale evaluation demonstrates that our method outperforms the MegaPose baseline in all dimensions. Additionally, our pose-tracking approach improves performance compared to our per-frame evaluation by introducing temporal filtering through tracklets.\\n\\nIt is worth noting that the projected translation errors are identical for MegaPose (coarse) and our method (per-frame), as both approaches use the same technique for translation estimation. However, our pose-tracking approach slightly improves the translation error compared to the MegaPose refiner, which suffers from inaccuracies when its assumption of an identical mesh is violated.\\n\\nThese preliminary results from the small-scale evaluation suggest that our approach is superior. However, due to limited time and the time-consuming nature of the annotation process, we were unable to annotate additional videos for the rebuttal. To strengthen this quantitative evaluation of our method, we are currently working on expanding the dataset to include 20 Internet videos of humans performing actions with everyday objects. This will enable us to compare our method more robustly. We will also compare against other baselines used in the submitted manuscript. Overall, we plan to annotate ~100 Internet videos to form an open-source test set to enable measuring progress on this hard problem.\\n\\n| | MegaPose Coarse (per-frame) | MegaPose Coarse+Refine (per-frame) | Ours (per frame) | Ours (pose tracking) |\\n| :---- | :---- | :---- | :---- | :---- |\\n| Average relative rotation \\\\[deg\\\\] | 59.61 | 58.14 | 19.17 | 16.03 |\\n| Average relative projected translation \\\\[px\\\\] | 28.08 | 45.07 | 28.08 | 26.53 |\\n| Average relative scaled depth | 1.38 | 1.39 | 1.38 | 0.98 |\"}",
"{\"metareview\": \"The paper introduces a novel framework for extracting temporally consistent 6D pose trajectories of manipulated objects from internet videos without requiring CAD models. The authors address the challenges by combining CAD retrieval, pose estimation, and trajectory optimization techniques. Experiments on video datasets demonstrate state-of-the-art performance. The real-world robotic manipulation demo further validates the practical utility of the approach. Reviewers consistently agree with the novelty of the pipeline, its practical significance, and its robust evaluation. During the discussion, concerns from the reviewers have been resolved with further details.\", \"additional_comments_on_reviewer_discussion\": \"There were several rounds of discussions, and most of the concerns from the reviewers were addressed. For example, 1) the authors clarified the comparison with state-of-the-art models, highlighted the advantages of their approach; 2) the authors demonstrated the effective use of a CAD model. Additionally, the authors responded to various questions raised by reviewers to improve the paper's quality in the revision.\"}",
"{\"comment\": \"The authors addressed most of my concerns in the rebuttal phase, and thus I would like to raise my score to 6.\"}",
"{\"comment\": \"> For the robotics demo, the end-effector position control is on 6D pose or only on the rotation? From the Figure 9, the translation of the end-effector seems not consistent with the original video and in the simulator\\n\\n**Response:** \\n\\nThank you for letting us clarify this issue. Please note that the input video, robot simulation, and real-robot videos were captured from different camera viewpoints. The same robot trajectory is executed in the simulation and by the real robot.\\n\\nThe robot trajectory is always computed to closely imitate the full 6D pose of the detected object from the video. We only manually transform the trajectory from the camera frame into the robot frame to account for differences in scale and robot position. \\nBecause of this, the perceived differences between the estimated object trajectory from the video, the object trajectory executed by the robot in the simulation and the object trajectory executed by the real robot are due to the differences of the viewpoints used for the visualization. We hope this clarifies the issue. \\n\\n\\n### **Questions**\\n\\n> Why in Figure 1 and Figure 2, the same image has two different retrieved CAD models?\\n\\n**Response:** \\nWe apologize for any confusion caused. Our intention with the overview figure (Fig. 2\\\\) was solely to illustrate the entire pipeline. As such, we have used one of the visually pleasing meshes among the top retrieved meshes instead of using the top retrieval from the Objaverse database. In the revised version of the paper, we will incorporate the retrieved mesh to prevent any potential future misunderstandings.\\n\\n> Can you provide the results of the error based on the quality of the retrieved CAD model?\\n\\n**Response:** \\n\\nIn general, it is difficult to define the quality of the retrieved CAD model. As a proxy, we compare the quality of the recovered pose (i) using the retrieved mesh and (ii) using the ground truth exact mesh (which is known for the standard datasets). This allows us to approximately analyze the metric sensitivity to the quality of the CAD model. Results are shown in Table 2 in the main paper, which is also shown below for your convenience. The last row of the table presents the average recall for the Oracle across various metrics (columns). The Oracle represents a scenario where both the mesh model and its corresponding scale are known from ground truth. The other rows present different methods for mesh retrieval.\\n\\nThe CoU (complement over union) and pCH (projected chamfer distance) metrics measure projected information, which is not significantly affected by depth. The Oracle mesh performs the best, with a reasonable decrease in performance for the other retrieval methods. This roughly illustrates the sensitivity of the metrics to the quality of the mesh.\\n\\nThe remaining metric, CH (Chamfer distance), measures the distance between vertices in 3D space. A significant drop in performance is observed for all methods compared to the Oracle. This drop is primarily caused by the scale estimation method, which affects the distance between the object and the camera.\\n\\n| Retrieval | AR | AR$_{CoU}$ | AR$_{CH}$ | AR$_{pCH}$ |\\n| :---- | :---- | :---- | :---- | :---- |\\n| (a) OpenShape | 34.51 | 20.25 | 10.59 | 72.69 |\\n| (b) Ours (CLS) | 45.05 | 40.93 | 16.16 | 78.06 |\\n| (c) Ours | 49.86 | 45.20 | 18.53 | 85.83 |\\n| (d) Oracle | 62.93 | 51.99 | 45.95 | 90.85 |\"}",
"{\"summary\": \"The authors present a novel approach to extract temporally consistent 6D pose trajectories of manipulated objects from Internet videos to be applied with robotic manipulation task. It tackles the challenges posed by uncontrolled capture conditions, unknown object meshes, and complex object motions. Their evaluation on YCB-V and HOPE-Video datasets shows state-of-the-art performance, with successful motion transfer to a robotic manipulator in both simulated and real-world settings.\\n\\n----------------------------------------------------------------------------------------------------\\nThe authors addressed most of my concerns in the rebuttal phase, and thus I would like to raise my score to 6.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The impact of the paper is dominant in the way that it provides an envision of enriched data for robotic manipulation without human labor force to construct the specific datasets. The methodology is intuitive and the performance enhancement is non-trivial. The paper is overall well-written.\", \"weaknesses\": \"My primary concern lies with the methodological novelty, as the approach largely involves applying an existing pipeline to internet videos. Specifically, the use of an LLM for estimating object scale may be questionable, given potential uncertainties around its accuracy in providing a realistic scale for each object. Aside from this, the methodology essentially adapts previous methods to fit the proposed pipeline. Given these factors, I feel this work might not align with ICLR's focus but could be more suited to a robotics conference.\", \"questions\": \"1. It might be great if the authors could ablate on the performance variation under different LLMs. Currently it only applies GPT-4, but it is important to know how different LLMs might influence the performance (i.e. one GPT-3.5 & one open-source LLM).\\n2. What's the efficiency & cost of such pipeline when performing inference on a 1-minute Instructional videos? \\n3. Using a CAD model can be costly since it requires a large database to store predefined meshes, and in open-world scenarios, finding an exact match is often unlikely. However, numerous approaches avoid relying on CAD models. For instance, \\\"6DGS: 6D Pose Estimation from a Single Image and a 3D Gaussian Splatting Model\\\" [ECCV 2024]. Have you tried experimenting with such methods? Or say, how do you envision those methods' strengths and weaknesses compared to your method.\\n4. For the standard evaluation, it might be beneficial to add another dataset evaluation using different cameras, say iPhone sensor as proposed in \\\"Robust 6DoF Pose Estimation Against Depth Noise and a Comprehensive Evaluation on a Mobile Dataset\\\" to further validate the approach's generalizability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"### **Weaknesses**\\n\\n> My primary concern lies with the methodological novelty, as the approach largely involves applying an existing pipeline to internet videos. Specifically, the use of an LLM for estimating object scale may be questionable, given potential uncertainties around its accuracy in providing a realistic scale for each object. Aside from this, the methodology essentially adapts previous methods to fit the proposed pipeline. Given these factors, I feel this work might not align with ICLR's focus but could be more suited to a robotics conference.\\n\\n**Response:** \\nThank you for letting us clarify this concern. Our method focuses on a novel in-the-wild setup of 6D pose estimation, that estimates the pose of the depicted object without prior knowledge of the object itself. While we build on existing work, we have developed an approach that significantly outperforms other methods on this challenging problem. We find it interesting that the existing best 6D pose estimation methods can be significantly outperformed on this problem by our simple yet carefully designed 2D-3D retrieval/matching approach combined with robust scale estimation and 2D-3D tracking. In addition, the issue of scale estimation has not been adequately addressed in the literature, and LLMs provide a viable path to capture prior knowledge about the visual world. Our approach allows us to robustly map that prior knowledge to specific objects in the specific observed scene while carefully addressing the inherent noise and ambiguities in the problem. Although scale estimates for individual objects are noisy, our method aims to robustly mitigate these object-level errors by aggregating information from multiple objects in the entire scene, as explained in the \\u201cGlobal Rescaling\\u201d paragraph of Section A.3.\\n\\n### **Questions**\\n\\n> It might be great if the authors could ablate on the performance variation under different LLMs. Currently it only applies GPT-4, but it is important to know how different LLMs might influence the performance (i.e. one GPT-3.5 & one open-source LLM).\\n\\n**Response:** \\nTo evaluate the dependence of our scale estimation method on the choice of LLM, we conducted an additional ablation study using three other models: one proprietary (GPT-3.5-Turbo) and two open-source models (Llama-3.1-8B and Gemma2-9B). We adapted prompts for each model to ensure the generation of a scale database in the expected format and the coverage of real-world objects.\\n\\nThe results of our experiments are shown below. We observe that GPT-4 outperforms all other models, with Llama-3.1-8B achieving a very similar score. Surprisingly, GPT-3.5-Turbo performs the worst, while the Gemma2-9B model is slightly better than GPT-3.5-Turbo but still lags behind GPT-4 and Llama-3.1-8B. Please note that scale estimation do not affect the projected metrics and therefore $AR_{CoU}$ and $AR_{pCH}$ remain constant.\\n\\n| LLM model | AR | AR$_{CoU}$ | AR$_{CH}$ | AR$_{pCH}$ |\\n| :---- | :---- | :---- | :---- | :---- |\\n| GPT-4 | 49.86 | 45.20 | 18.53 | 85.83 |\\n| GPT-3.5-Turbo | 45.29 | 45.20 | 4.85 | 85.83 |\\n| Llama-3.1-8B | 49.32 | 45.20 | 16.92 | 85.83 |\\n| Gemma2-9B | 46.00 | 45.20 | 6.98 | 85.83 |\\n\\n> What's the efficiency & cost of such pipeline when performing inference on a 1-minute Instructional videos?\\n\\n**Response:** \\nThe method onboarding stage consists of pre-rendering the meshes and extracting the features. 
When running on a single image, the resulting runtime for detection, retrieval, and scale estimation is \\\\~2s per image. For the pose estimation part, the runtime is \\\\~0.2s per object. However, when running on an instructional video, we run the detector only on the first frame and track the detected objects through the video using \\\\[SAM2\\\\], which can run in real-time. Then we run the retrieval and scale estimation, which can benefit from multi-frame prediction aggregation, but does not necessarily have to use all video frames, especially for longer videos. In practice, we used 30 frames throughout the video for the retrieval and scale estimation. In contrast to single-image estimation, we use \\\\[CoTracker\\\\] combined with PnP to extract smooth poses from the video, with run-time of \\\\~1s per frame. \\n\\n\\\\[SAM2\\\\] Ravi, Nikhila, et al. \\\"Sam 2: Segment anything in images and videos.\\\" arXiv preprint arXiv:2408.00714 (2024). \\n\\\\[CoTracker\\\\] Karaev, Nikita, et al. \\u2018CoTracker: It Is Better to Track Together\\u2019. Proc. ECCV, 2024\\\\.\"}",
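As a sketch of the per-frame pose recovery mentioned above, the following shows a generic PnP step once 2D-3D correspondences between CAD-model points and CoTracker tracks are assumed to be given; it is an illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def pose_from_tracks(pts_3d, pts_2d, K):
    """pts_3d: (N, 3) points on the retrieved CAD model, pts_2d: (N, 2) tracked pixels,
    K: (3, 3) camera intrinsics. Returns a 4x4 object-to-camera transform or None."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        pts_3d.astype(np.float64), pts_2d.astype(np.float64),
        K.astype(np.float64), None)          # None = no lens distortion assumed
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    T[:3, 3] = tvec.ravel()
    return T
```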
"{\"comment\": \"> l. 519ff, the real robot experiment is rather anecdotal and lacks important details in its descriptions and quantitative evaluation (e.g., success rate). How are the observed object trajectories transfered to the real robot experiment incl. considering the change of view point and embodiment? How does the robot know where the manipulated objects are and how is this matched to the observed object motion?\\n\\n**Response:** \\nWe apologize for not describing the real-robot experiment in detail. We will address this in the revised version based on the information provided below.\\n\\n**Moving the object to the starting pose.** In the first step, we manually placed the object into the gripper (fixing the robot gripper-to-object transformation) and moved the object using the robot to the initial pose, computed as $T_{RC} T_{CO}^0$, where $T_{CO}^0$ \\u200bis the object pose relative to the camera. This pose is estimated by our method and corresponds to the first frame of the video. $T_{RC}$ is the virtual camera pose relative to the robot, which was manually chosen to simulate a camera looking at the robot from the front with a 30-degree elevation relative to the gravity axis. This camera pose was manually defined and kept constant across all videos. Using this approach, the object was moved to a pose visually similar to that shown in the videos. However, in practice, this step would not be necessary as the robot would start with the object already grasped in its gripper or a separate grasping process would be called (e.g. using a combination of motion planning and GraspIT/GraspNet)*.*\\n\\n**Following the trajectory from the video:** In the second step, we computed the object's motion. To increase the transferability of the method, we first expressed the object's motion relative to the starting pose of the extracted object trajectory from the video. This relative motion was then applied to the object's pose in simulation (or real life) to derive the reference object trajectory for the robot. Finally, trajectory optimization, as shown in Eq. (4), was solved to obtain the motor torques of the robot to imitate the reference object trajectory\\n\\n**Quantitative evaluation:** Quantitative evaluation is indeed missing, as measuring success rates for in-the-wild object interactions is challenging. In our real-robot experiments, all processed videos were successfully retargeted to the robot, and all object motions resulted in putting some material inside the (manually placed) target object. While this could serve as a measure of success, we felt it was too coarse to report it. Therefore, we limited ourselves to qualitative evaluation for the robotic experiments. However, we would be happy to report it in the revised version of the paper. \\n\\n> Fig. 8, in the upper additional qualitative result, the bowl object pose is not correctly tracked. Why does the robot still turn the object in a quite different angle?\\n\\n**Response:** \\nThank you for letting us clarify this issue. Please note that the input video, robot simulation, and real-robot videos were captured from different camera viewpoints. The same robot trajectory is executed in the simulation and by the real robot.\\n\\nThe robot trajectory is always computed to closely imitate the relative 6D pose transformations of the detected object from the video. 
We only transform the trajectory from the camera frame into the robot frame to account for differences in scale and robot position, as described in our response to the previous point.\\nBecause of this, the perceived differences between the estimated object trajectory from the video, the object trajectory executed by the robot in the simulation, and the object trajectory executed by the real robot are due to the differences in viewpoints. We hope this clarifies the issue. \\n\\n### **Additional minor comments**\\n\\n> Fig. 6, rightmost real robot image seems to be a repetition of the image next to it. Was the wrong image included?\\n\\n**Response:** \\nThank you for noticing, this was indeed a mistake. We will replace the image with the correct one in the revised version.\\n\\n### **Questions**\\n\\n> l. 323, are the ground-truth meshes contained in the object datasets?\\n\\n**Response:** \\nThe exact ground-truth meshes are most likely not part of Objaverse (at least we did not see them). However, the dataset can either contain other 3D meshes of the same objects, or meshes that are very similar (e.g. Campbell's Primordial Soup instead of the Tomato Soup \\\\- see Figure 3).\\n\\n> Table 1, was the same scale estimate for the meshes used for MegaPose and GigaPose like for the proposed method?\\n\\n**Response:** \\nYes, to consistently compare all 6D pose estimation methods mentioned in Table 1, we always reuse the same object detections and respective scale estimates.\"}",
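The frame composition described in the two steps above amounts to the following NumPy sketch; variable names mirror the notation $T_{RC}$ and $T_{CO}$ used there, and the actual pipeline additionally solves the trajectory optimization of Eq. (4) to obtain the robot torques.

```python
import numpy as np

def reference_object_trajectory(T_RC, T_CO_traj):
    """T_RC: (4, 4) virtual camera pose in the robot frame (manually chosen).
    T_CO_traj: list of (4, 4) estimated object poses in the camera frame.
    The first pose is placed at T_RC @ T_CO_traj[0]; later frames apply the
    object's motion relative to that starting pose."""
    T_start = T_RC @ T_CO_traj[0]
    T0_inv = np.linalg.inv(T_CO_traj[0])
    return [T_start @ (T0_inv @ T_CO) for T_CO in T_CO_traj]
```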
"{\"comment\": \"> Using a CAD model can be costly since it requires a large database to store predefined meshes, and in open-world scenarios, finding an exact match is often unlikely. However, numerous approaches avoid relying on CAD models. For instance, \\\"6DGS: 6D Pose Estimation from a Single Image and a 3D Gaussian Splatting Model\\\" \\\\[ECCV 2024\\\\]. Have you tried experimenting with such methods? Or say, how do you envision those methods' strengths and weaknesses compared to your method.\\n\\n**Response:** \\nThank you for the question. Even though 6DGS does not need a CAD model of the object, it requires a 3DGS model, i.e. gaussian splatting representation of the object/scene. However, creating the 3DGS model requires having a set of images of a static scene from calibrated cameras with the corresponding camera poses. When running on in-the-wild images or YouTube videos, we do not have this information and thus cannot create the GS model in the same way. In contrast, our method estimates the poses/trajectories of objects in-the-wild, and does not require prior information about the object. We agree that the exact CAD model is often not available even in large-scale object databases like Objaverse, however, our method leverages the power of DINOv2 features to estimate the poses of observed objects even with an approximate CAD model. The intuition behind our approach is that we can estimate the *relative* 6D object transformations from the video (i.e. the object\\u2019s trajectory) even if the matched object from the database is not exact.\\n\\nAt the initial stage of our project we explored potential applications of generation image-to-3D models such as \\\\[zero123-XL\\\\] or \\\\[stable-zero123\\\\], we found that in practice the CAD models generated by those models conditioned on in-the-wild real-world photos are of insufficient quality for our task. This is mainly driven by the fact that lots of query images are taken from varying elevation angles, lightning conditions etc, which makes inconsistent predictions with those models. \\n\\nWe have also experimented with methods such as \\\\[Dust3r\\\\] or \\\\[SplatterImage\\\\] to build 3D object models directly from the input Youtube videos and found that those methods did not produce usable results (often producing severely distorted objects) likely due to the challenging nature of such in-the-wild videos (unusual viewpoints, difficult illumination, occlusions, low resolution, blur, etc). \\n\\n\\\\[zero123-XL\\\\] Ruoshi Liu et al., Zero-1-to-3: Zero-shot One Image to 3D Object, ICCV 2023 \\n\\\\[stable-zero123\\\\] Stability AI,. Stable-Zero123 \\n\\\\[Dust3r\\\\] DUSt3R: Geometric 3D Vision Made Easy, Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, Jerome Revaud, CVPR 2024 \\n\\\\[SplatterImage\\\\] Splatter image: Ultra-fast single-view 3d reconstruction, Stanislaw Szymanowicz, Chrisitian Rupprecht, Andrea Vedaldi, CVPR 2024\\\\. \\n\\n> For the standard evaluation, it might be beneficial to add another dataset evaluation using different cameras, say iPhone sensor as proposed in \\\"Robust 6DoF Pose Estimation Against Depth Noise and a Comprehensive Evaluation on a Mobile Dataset\\\" to further validate the approach's generalizability.\\n\\n**Response:** \\nThank you for suggesting this dataset for evaluating our approach. We converted the DTTD2 dataset into the standardized BOP format and evaluated it using our setup, which uses an RGB-based model-retrieved pose estimation. 
The results of our method, along with all baseline methods, are presented in the table below. Please note that FoundPose results were obtained using the official source code by the authors of the method. This is in contrast to the submission, where we used our own implementation because the official code was not yet available at the time of the submission.\", \"the_observed_trend_on_this_new_dataset_is_consistent_with_the_other_datasets_we_evaluated_on\": \"our method achieves the highest recall, as baseline methods struggle with inaccuracies in the mesh compared to our approach. Note that we used our model retrieval and scale estimation modules to generate the same scaled mesh for all methods in the table.\\n\\n| Method | $AR$ | $AR\\\\_{CoU}$ | $AR\\\\_{CH}$ | $AR\\\\_{pCH}$ |\\n| :---- | :---- | :---- | :---- | :---- |\\n| MegaPose (w/o refiner) | 26.15 | 13.72 | 6.67 | 58.05 |\\n| MegaPose | 27.84 | 17.73 | 8.77 | 57.00 |\\n| GigaPose | 41.84 | 39.32 | 12.16 | 74.00 |\\n| FoundPose | 37.39 | 31.32 | 11.41 | 69.44 |\\n| Ours | **47.11** | **52.10** | **12.68** | **76.54** |\"}",
"{\"summary\": \"This paper proposes a new approach to detect and track the 6-DoF pose of unknown objects from RGB video. The approach is motivated by robot imitation learning from internet video. The approach uses off-the-shelf open-set object detectors, foundation models for segmentation, vision-language (CLIP), and visual features (DINOv2) to detect objects, retrieve similar shapes from a database of CAD models, and matching the object image with a set of rendered views of the object CAD model to estimate 3D orientation. Experimental evaluation is performed quantititvely on YCB-Video and HOPE-Video datasets and a comparison is made with state of the art object detectors for unseen objects for which the CAD model is assumed known (MegaPose, GigaPose). Also, qualitative results on EPIC-Kitchen, and an example of executing the estimated object trajectories on a real robot are shown.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed approach for detecting and estimating 6D motion of unknown objects from RGB images is novel and interesting.\", \"The paper is well written and easy to follow.\", \"The set of experiments demonstrate the shape retrieval and pose estimation well and also compare with state of the art methods.\", \"A qualitative example is provided with a real robot which show the robot pouring from one object to another.\"], \"weaknesses\": [\"l. 197ff, CAD model retrieval by rendering views and calculating visual features seems expensive in both, the database generation and the retrieval stage for large datasets such as Objaverse-LVIS. What is the retrieval time for these datasets and how is it implemented to make retrieval efficient?\", \"l. 220ff proposes to retrieve rotation by matching to a set of rendered views. What is the choice of N in the experiments? What is the avg/std angular distance between sampled rotations?\", \"l. 243ff, the way to prompt the LLM in the supplementary is an offline procedure to collect size estimates for approximately 2200 objects. In the main paper, the description reads as if the LLM is prompted for each detected object using the CLIP text classification. Please describe this more clearly. What if the detected object is not included in the offline calculated set ?\", \"l. 286, was estimating the motion of the camera relative to the static background evaluated in this work ? Please clarify.\", \"The optimization problem in eq 4 does not provide a description of the used system dynamics model.\", \"l. 361, please write more clearly, that while a similar mesh is known, the retrieved mesh does not exactly correspond to the ground truth mesh which is an assumption used for MegaPose and GigaPose.\", \"Please introduce the pCH metric formally, at least in the supplemental material. The current description is insufficient.\", \"l. 519ff, the real robot experiment is rather anecdotal and lacks important details in its descriptions and quantitative evaluation (e.g., success rate). How are the observed object trajectories transfered to the real robot experiment incl. considering the change of view point and embodiment? How does the robot know where the manipulated objects are and how is this matched to the observed object motion?\", \"Fig. 8, in the upper additional qualitative result, the bowl object pose is not correctly tracked. Why does the robot still turn the object in a quite different angle ?\"], \"additional_minor_comments\": [\"Fig. 
6, rightmost real robot image seems to be a repetition of the image next to it. Was the wrong image included?\"], \"questions\": [\"l. 323, are the ground-truth meshes contained in the object datasets?\", \"Table 1, was the same scale estimate for the meshes used for MegaPose and GigaPose like for the proposed method?\", \"Which dynamics model is used for the optimization problem in eq 4? How is tracking of the optimized trajectory implemented?\", \"See additional questions in sec. \\\"Weaknesses\\\".\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> Which dynamics model is used for the optimization problem in eq 4? How is tracking of the optimized trajectory implemented?\\n\\n**Response:** \\nWe utilized the publicly available dynamic model for the Panda robot from the example-robot-data package, accessible via Conda and PyPI. This package includes the kinematics, as well as the geometric and inertia parameters for each link of the robot. \\n\\nFor forward dynamics computation, we employed the Pinocchio library, which is also internally used within the Aligator trajectory optimization package. The dynamic model was used to compute joint positions and velocities based on the input sequence of joint torques, while the kinematic model was used to determine the object's pose through forward kinematics. Consequently, both the end-effector pose and joint velocities are directly influenced by the optimized torques in Eq. (4).\\n\\nThe robot is controlled in joint space (with 7 revolute joints) in position mode, defined by joint angles. To ensure smooth motion, joint torques were optimized using Eq. (4), and the corresponding joint positions were computed based on the forward dynamics.\"}",
"{\"comment\": \"Thanks for the response from the authors. Most of my concerns are addressed. I will raise my score.\"}",
"{\"comment\": \"Thank you for your suggestion. We completely agree that establishing quantifiable measures of success and a repeatable experimental setup is crucial for making progress on this challenging problem. To address this, we have developed an annotation tool designed to annotate object manipulation in internet videos, providing approximate trajectories for the manipulated objects.\\n\\n**Annotation.** Specifically, our approach involves precomputing the top 25 candidates for mesh retrieval, after which the annotator manually selects the best-fitting mesh for the given video. Symmetry is manually annotated to indicate if the selected object is symmetric (e.g., a bowl), in a similar manner as done in Labbe et al., ECCV 2020 ([https://arxiv.org/abs/2008.08465](https://arxiv.org/abs/2008.08465)). The selected mesh is then manually aligned with the manipulated object in each video frame. During this process, the annotator adjusts the x-y translation, depth, SO(3) rotations, and perspective effects while assuming an approximate scale for the mesh (set at 15 cm for the annotated objects). This methodology ensures an approximate reconstruction of the full SE(3) trajectory for the annotated object, providing a solid foundation for further evaluation. So far, we have used this annotation tool to compute reference trajectories for five Internet videos (altogether \\\\~900 frames), which are presented qualitatively in the submitted manuscript.\\n\\n**Metric.** For the evaluation, we designed metrics that compare the relative transformations of objects over time, ensuring robustness to variations in the starting poses of the motion and emphasizing the nature of the motion rather than its absolute position. The evaluation is split into two components: rotation and translation.\\n\\nTo assess rotation, we compute the spatial angular velocities required to rotate the object from frame \\\\\\\\( k \\\\\\\\) to frame \\\\\\\\( k \\\\+ \\\\\\\\Delta \\\\\\\\). We then compare the angular velocities of the manually annotated reference trajectory with those of the trajectories produced by our method or a baseline. The metric is averaged across all frames \\\\\\\\( k \\\\\\\\) for various values of \\\\\\\\( \\\\\\\\Delta \\\\\\\\), ranging from 1 (to evaluate local consistency) to 50% of the trajectory length (to evaluate global consistency). To deal with symmetries, a minimal error is selected over the annotated symmetries. The spatial velocity is used instead of the body velocity to ensure that the metric is agnostic to the choice of the object\\u2019s coordinate frame, maintaining consistency regardless of variations in the object's body frame orientation or location.\\n\\nFor the translation evaluation, we assess separatelly (i) the spatial velocities of object positions projected onto the image plane and (ii) the scaled depth velocities. We use the projected spatial velocities to factor out the effect of scale/depth ambiguity. For the depth velocity computation, we normalize depth by the scale of the object using 15 cm for the annotated ground truth and the absolute scale estimated by our method for the retrieved mesh. The same averaging approach used for rotation is applied here, averaging over all frames in the video and across various values of \\\\\\\\( \\\\\\\\Delta \\\\\\\\).\"}",
"{\"comment\": \"> l. 286, was estimating the motion of the camera relative to the static background evaluated in this work? Please clarify.\\n\\n**Response:** \\nIn the paper, we focused on computing the object's motion relative to the camera, so we did not explicitly estimate the camera's relative pose. In the revised version, we will make it clear that camera motion is not estimated. However, the assumption of a static camera is still relevant for many use cases, such as third-person view how-to videos.\\n\\n> The optimization problem in eq 4 does not provide a description of the used system dynamics model.\\n\\n**Response:** \\nThank you for bringing this to our attention. We utilized the publicly available dynamic model for the Panda robot from the example-robot-data package, accessible via Conda and PyPI. This package includes the kinematics, as well as the geometric and inertia parameters for each link of the robot.\\n\\nFor forward dynamics computation, we employed the Pinocchio library, which is also internally used within the Aligator trajectory optimization package. The dynamic model was used to compute joint positions and velocities based on the input sequence of joint torques, while the kinematic model was used to determine the object's pose through forward kinematics. Consequently, both the end-effector pose and joint velocities are directly influenced by the optimized torques in Eq. (4). We will explicitly clarify this in the revised version of the paper.\\n\\n> l. 361, please write more clearly, that while a similar mesh is known, the retrieved mesh does not exactly correspond to the ground truth mesh which is an assumption used for MegaPose and GigaPose.\\n\\n**Response:** \\nThank you for pointing this out. While we mention this in the related work, we will emphasize this more clearly also in the experiments section in the revised version of the paper.\\n\\n> Please introduce the pCH metric formally, at least in the supplemental material. The current description is insufficient.\\n\\n**Response:** \\nThank you for this suggestion. We will expand the definitions of all metrics in Appendix B to include precise formal definitions. The pCH metric is a projected Chamfer distance, inspired by the MSPD (Maximum Symmetry-Aware Projection Distance) metric used in the BOP challenge, given by \\n\\n$pCH = \\\\\\\\frac{1}{|M_{pred}|} \\\\sum_{x \\\\in M_{pred}}min_{y \\\\in M_{GT}}||\\\\pi(x, K, T) - \\\\pi(y, K, T_{GT})||_2^2$\\n\\n$~~~~~~~~~+ \\\\\\\\frac{1}{|M_{GT}|} \\\\sum_{y \\\\in M_{GT}}min_{x \\\\in M_{pred}}||\\\\pi(x, K, T) - \\\\pi(y, K, T_{GT})||_2^2$\\n\\nwhere $K$ is camera intrinsics, $T$ is a predicted pose, $T_{GT}$ is ground truth pose, $M_{pred}$ is a set of points sampled from the predicted mesh, $M_{GT}$ is a set of points sampled from the ground truth mesh, $x$ and $y$ are vertices of the meshes, and function $\\\\pi$ projects 3D vertex into 2D pixel. The core idea of this metric is to evaluate how well the estimated pose visually aligns when projected onto the image plane. As the closest vertex is found for each vertex of other mesh, this metric can be used for non-identical meshes or for symmetric meshes. This approach allows us to assess alignment quality, even when the predicted scale is not exact, since the scale is factored out during the projection.\"}",
"{\"comment\": \"Thank you for your detailed response. The response has successfully addressed most of my concerns. By combining my opinion with the comments of other reviewers, I decide to keep my original score 6.\"}",
"{\"title\": \"Thanks for author response\", \"comment\": \"The author response addressed most of my concerns well.\\n\\nWrt. quantitative evaluation: the result reported in the author response indeed seems too coarse for a scientific paper. Instead, a quantifiable measure of success and a repeatable set of test scenarios should be defined and evaluated.\"}",
"{\"comment\": \"### **Weaknesses**\\n\\n> The original contributions should be expressed more clearly. In the proposed method, various existing methods are employed. It is suggested to clearly distinguish the original contributions in this paper and usage of other methods. Specifically, the first contribution locates in the pose estimation method by retrieving a CAD model, aligning the retrieved CAD model, and grounding the object scale with respect to the scene. The subsequent question is that what is the original contribution, the whole pipeline or the detailed design of a particular module? The authors are suggested to express this more clearly in the revised version. For the second and third contributions, it is also recommended to present more clear expressions.\\n\\n**Response:** \\nThank you for letting us address this concern. The first and main contribution of our the paper lies in designing an end-to-end approach that addresses in-the-wild pose estimation in a completely novel setup without the available exact mesh. While we build on existing work, we have developed an approach that significantly outperforms other methods on this challenging problem. We find it interesting that the existing best 6D pose estimation methods can be significantly outperformed on this problem by our simple yet carefully designed 2D-3D retrieval/matching approach combined with robust scale estimation and 2D-3D tracking. \\n\\nSecond, addressing this hard problem required the development of a new module for object scale estimation, a challenge that has not been adequately explored in the literature, as well as adaptations to the object detection stage. Lastly, we investigated and introduced metrics suitable for comparing different methods, as traditional 6D pose estimation metrics are not applicable in this setup without known exact mesh. We will clarify the novelty of our approach more explicitly in the revised version.\\n\\n> For robotic manipulation, the running time of the pose estimation method is a key factor. The proposed method in the paper is somewhat time-consuming with 2s for detector, retrieval and scale estimation per scene and 0.2s for pose estimation per object. To further improve the paper, two suggestions are given. For one thing, the comparaions with other methods on running time are suggested to add. For another, more analysis about the running time is also preferred, such as the recommendations for accelerate the whole method.\\n\\n**Response:** \\nOur method is primarily an offline method that is meant for offline extraction of manipulation trajectories from videos for (offline) robot learning. Moreover, the detector is used only on the first frame of the video, and the objects are tracked through the video with \\\\[SAM2\\\\], which can run in real time. Then we run the retrieval and scale estimation, which can benefit from multi-frame prediction aggregation, but does not necessarily have to use all video frames, especially for longer videos. In practice, we used 30 frames throughout the video for the retrieval and scale estimation.\\n\\nOur method is being run in BF16 to speed up the inference time. Given that our method processes every object independently, the inference can be parallelized among multiple processes and GPUs. \\n\\nFor runtime comparison, on average, for a single video frame our method takes \\\\~0.2s per object, while MegaPose takes \\\\~4.5s per object, GigaPose takes 0.03s per object and FoundPose around \\\\~0.15s per object. 
We will include runtimes of all methods used in the paper into the revised manuscript.\\n\\n\\\\[SAM2\\\\] Ravi, Nikhila, et al. \\\"Sam 2: Segment anything in images and videos.\\\" arXiv preprint arXiv:2408.00714 (2024).\\n\\n### **Questions**\\n\\n> With the similar CAD model retrieval, the classification can also be obtained. I wonder if it is possible to use the CAD model to perform classification directly?\\n\\n**Response:** \\nIndeed, there are multiple ways to tackle the CAD model retrieval. In our paper, we explored retrieval based on averaged visual features (\\\\[CLS\\\\] tokens of the DINOv2 model and FFA descriptors constructed from foreground patch tokens of the DINOv2 model). We also add a multi-modal OpenShape model for comparison, which aims to align image-text-mesh triplets in CLIP feature space. The retrieval using this model is done by matching the CLIP image token to the \\\\~50,000 precomputed CAD models projected into CLIP space using OpenShape, which can be understood as zero-shot classification. However, in Table 2 we show that for our task, the image-based retrieval outperforms the OpenShape baseline. We assume that this is caused by the fact that OpenShape is trained with synthetic renderings, which causes a large domain gap between the training data and real in-the-wild images of objects.\"}",
"{\"summary\": \"The paper introduces a pipeline for extracting the 6D pose trajectory from an internet video without the need of the CAD for the specific object. The authors leverage vision features to retrieve the most similar CAD model of the object, then do per-frame alignment leveraging the same vision features of the original image and rendered from the CAD. They further estimate the rough object size using LLM and leverage 2D tracking models to get inter-frame rotation consistency. The authors conduct experiments and demonstrate their superior performance. They also show demos that their trajectory can be retargeted to guide the movement of the robot.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The task of predicting the 6D pose of internet videos without additional prior is important for a lot of downstream tasks.\\n2. The whole pipeline is reasonable, fetch the similar CAD model and do rough alignment. Then further leverage the 2D tracking results to get the smoothed trajectories, that are more motion-consistent across time.\\n3. The experiments on the retargeted motion on robotics further show the usefulness of the extracted smoothed trajectories.\", \"weaknesses\": \"1. The authors demonstrate that compared to model-based methods, whose performances suffer from the inaccurate CAD mode, their method addresses the challenge. However, there is lack of experiments compared to SOTA model-based methods with their fetched CAD models (e.g. FoundationPose with their retrieved CAD model).\\n2. In the 6D pose alignment part, the method applies a sapling-based trajectory to get the rotation, which potentially limits the accuracy of the rotation. In the results figure, there are some rotation errors, not sure if due to the sampling-based strategy or the DINO feature extractor.\\n3. For the robotics demo, the end-effector position control is on 6D pose or only on the rotation? From the Figure 9, the translation of the end-effector seems not consistent with the original video and in the simulator\", \"questions\": \"1. Why in Figure 1 and Figure 2, the same image has two different retrieved CAD models?\\n2. Can you provide the results of the error based on the quality of the retrieved CAD model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"The paper pays attention on 6D pose trajectory estimation of a manipulated object from an Internet instructional video with a novel framework. The framework first predicts the 6D pose of any object by CAD model retrieval. Then the smooth 6D object trajectories are extracted and retargeted via trajectory optimization into a robotic manipulator. Experiments on YCB-V and HOPE-Video datasets demonstrate the improvements over RGB 6D pose methods. Moreover, the 6D object motion can be transferred to a 7-axis robotic manipulator.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1 The pose estimation method by retrieving a CAD model, aligning the retrieved CAD model with the object, and grounding the object scale with respect to the scene.\\n\\n2 Consistent 6D pose trajectory estimation from Internet videos and retargeting trajectories to a robotic manipulator.\\n\\n3 The pose estimation improvement on YCB-V and HOPEVideo datasets, and transfer from 6D object motion to a 7-axis robotic manipulator.\", \"weaknesses\": \"1 The original contributions should be expressed more clearly. In the proposed method, various existing methods are employed. It is suggested to clearly distinguish the original contributions in this paper and usage of other methods. Specifically, the first contribution locates in the pose estimation method by retrieving a CAD model, aligning the retrieved CAD model, and grounding the object scale with respect to the scene. The subsequent question is that what is the original contribution, the whole pipeline or the detailed design of a particular module? The authors are suggested to express this more clearly in the revised version. For the second and third contributions, it is also recommended to present more clear expressions.\\n\\n2 For robotic manipulation, the running time of the pose estimation method is a key factor. The proposed method in the paper is somewhat time-consuming with 2s for detector, retrieval and scale estimation per scene and 0.2s for pose estimation per object. To further improve the paper, two suggestions are given. For one thing, the comparaions with other methods on running time are suggested to add. For another, more analysis about the running time is also preferred, such as the recommendations for accelerate the whole method.\", \"questions\": \"1 With the similar CAD model retrieval, the classification can also be obtained. I wonder if it is possible to use the CAD model to perform classification directly?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1BlEVFmqwn | $\text{O}_\text{2}$VIS: Occupancy-aware Object Association for Temporally Consistent Video Instance Segmentation | [
"Seunghun Lee",
"Jiwan Seo",
"Minwoo Choi",
"Kiljoon Han",
"Jaehoon Jeong",
"Ehsan Adeli",
"Sang Hyun Park",
"Sunghoon Im"
] | In this paper, we present Occupancy-aware Object Association for Video Instance Segmentation ($\text{O}_{\text{2}}$VIS), a new framework crafted to improve long-term consistency in instance tracking. We introduce the Instance Occupancy Memory (IOM) that tracks global instance features and their occupancy status to effectively differentiate between recurring and new objects. It ensures consistent tracking and effective management of object identities across frames, enhancing the overall performance and reliability of the VIS process. Moreover, we propose a Decoupled Object Association (DOA) strategy that handles existing and newly appeared objects separately to optimally assign indices based on occupancy. This technique enhances the accuracy of object matching and ensures stable and consistent object alignment across frames, especially useful in dynamic settings where objects frequently appear and disappear. Extensive testing and an ablation study confirm the superiority of our method over traditional methods, establishing new standards in the VIS domain. | [
"Video instance segmentation",
"Long-term memory",
"Temprorally consistent learning"
] | https://openreview.net/pdf?id=1BlEVFmqwn | https://openreview.net/forum?id=1BlEVFmqwn | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"reGeTCmbJv",
"pKgSjnSMDq",
"bfSU86H4e3",
"Rqb28rCcH7",
"GwkoYBTK37",
"6F4dIVkJJw"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730584850253,
1730650238659,
1730642339262,
1730664501271,
1731480639651,
1730713517988
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission638/Reviewer_s7AP"
],
[
"ICLR.cc/2025/Conference/Submission638/Reviewer_nofb"
],
[
"ICLR.cc/2025/Conference/Submission638/Reviewer_reMk"
],
[
"ICLR.cc/2025/Conference/Submission638/Reviewer_kMha"
],
[
"ICLR.cc/2025/Conference/Submission638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission638/Reviewer_yELW"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces an occupancy memory and a decoupled object association to track global features of objects long term and to ensure consistent matching of new and old objects. The proposed method achieves good performance on the VIS task.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow. A thorough experimental analysis has been performed.\", \"weaknesses\": \"1. Limited technical novelty: The paper proposes some techniques to improve VIS, but all of these techniques have been seen in other tracking/VIS works in some form or the other. For example, the IOM is similar to global track queries in trackformer [2], Hungarian matching to align current objects to a global memory in DOA has been explored before in many works.\\n2. Incremental improvement: The results in table 1 and 2 often show a minimal improvement as compared to prior works. For example, on the youtube vis 2019 dataset, the method only gets a 0.2 points improvement over DVIS++ using the R50 backbone. Similar trend is observed for other datasets and other backbones. These improvements could often just come from randomness during training, so it would be nice if the authors could put error bars in the tables to demonstrate consistency. \\n3. Some prior works (e.g., CAROQ [1]) use query-based propagation for global tracking. How does the proposed method compare with such a method in terms of the number of parameters involved in tracking and the tracking speed? The proposed method requires 2 networks for tracking, as opposed to 1 network in most prior works, so some comparison table on the average time taken and the parameters involved solely for tracking would also be insightful.\\n4. There are some typos in the paper, e.g., a capitalized letter mid-sentence in line 47.\\n\\n\\n[1] Choudhuri et al., Context-Aware Relative Object Queries to Unify Video Instance and Panoptic Segmentation, CVPR 2023\\n[2] TrackFormer: Multi-Object Tracking with Transformers, Meinhardt et al., CVPR 2022\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces O2VIS, a novel framework for video instance segmentation that enhances long-term consistency in object tracking. The framework incorporates an Instance Occupancy Memory (IOM) module and a Decoupled Object Association (DOA) strategy, effectively distinguishing between new and recurring objects across frames. By decoupling the association of existing and newly appeared objects, the method maintains stable and consistent object identities throughout videos. Experimental results demonstrate that O2VIS achieves state-of-the-art AP scores on the YouTube-VIS 2019, 2021, and 2022 datasets, setting a new benchmark for the VIS task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Innovation: The study proposes an instance occupancy memory mechanism that addresses challenges in maintaining consistency when objects disappear and reappear, making it well-suited for complex, dynamic video scenes.\", \"performance\": \"The experimental results show that O2VIS significantly outperforms current state-of-the-art methods across multiple datasets, especially in AP scores.\", \"decoupled_strategy\": \"By implementing a decoupled association strategy for handling existing and new objects separately, the method avoids common background misalignment issues, enhancing tracking accuracy.\", \"comprehensive_experiments\": \"The paper provides thorough experimental comparisons with existing VIS methods, and ablation studies validate the effectiveness of each technical component, demonstrating the contributions of IOM and DOA modules.\", \"weaknesses\": \"Insufficient Details: While the paper introduces the Occupancy-guided Hungarian Matching and Decoupled Object Association strategies, implementation details are limited. Providing pseudocode or more concrete algorithmic descriptions could enhance clarity.\", \"computational_cost\": \"The addition of IOM and DOA likely increases computational complexity, particularly due to multi-frame memory updates and association matching. It would be beneficial to quantify the computational overhead of these modules within the paper.\", \"generalizability\": \"Experiments are currently focused on standard datasets like YouTube-VIS. The model\\u2019s performance in more challenging scenarios, such as high occlusion or rapid object movement, remains unclear.\", \"model_complexity\": \"With the integration of multiple modules, the overall model structure is complex, which may pose deployment challenges. Future work could explore simplifying the model or improving its efficiency.\", \"questions\": \"see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces O2VIS, a novel framework designed to enhance long-term consistency in video instance segmentation. The work presents two main technical innovations: an Instance Occupancy Memory (IOM) for tracking global instance features and their occupancy status, and a Decoupled Object Association (DOA) strategy that separately handles existing and new objects. The framework demonstrates state-of-the-art performance on YouTube-VIS benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1 The paper's technical contributions are both novel and well-executed. The IOM mechanism provides an elegant solution to the challenging problem of maintaining object identity consistency, while the decoupled association strategy effectively addresses the issue of new object appearances.\\n\\n2 The comprehensive experimental evaluation, including extensive ablation studies, convincingly demonstrates the effectiveness of each component. The strong performance across multiple benchmarks further validates the proposed approach.\", \"weaknesses\": \"1 The paper does not provide comparisons of model parameters and inference speed with existing methods, making it difficult to assess the practical implications of implementing this approach.\\n\\n2 There is no discussion of memory consumption or runtime benchmarks, which are crucial considerations for real-world applications. \\n\\n3 Some technical details, particularly regarding the IOM update mechanism and the interaction between TE and TA trackers, could be explained more thoroughly.\", \"questions\": \"1 The authors should include a comprehensive comparison of computational resources, including model parameters, inference speed, and memory usage, with existing methods. This would provide crucial context for understanding the practical trade-offs of their approach.\\n\\n2 Additionally, including more detailed pseudo-code for key algorithms and visualizations of memory usage patterns would enhance the technical clarity of the paper. \\n\\n3 Finally, an analysis of failure cases and performance on longer video sequences would provide valuable insights into the method's limitations and potential areas for improvement.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents a method for long-term tracking consistency in videos for the task of video instance segmentation. The core idea is that using the visibility or occupancy of the objects can help in associating their features correctly so as to differentiate between new and previously seen objects. Treating these two kinds of objects differently by associating them separately also helps. Experiments exist that compare state-of-the-art approaches to the proposed method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The motivation is well-grounded that the objects should be treated differently based on if they have been seen before\"], \"weaknesses\": [\"One of the most important baselines is missing for this task is SAM2 for video instance segmentation\", \"While the motivation is good, similar ideas have appeared before for tracking, for instance in DeepSORT or Detecting Invisible People. These are not segmentation approaches but should be cited to acknowledge that both of the main contributions of this paper have appeared before.\", \"It seems like the ID switches metric from multi-object tracking based on bounding boxes, is what the paper wanted to improve but there is no comparison to prior approaches with that metric so it is hard to tell if their claim of long-term consistency is valid over an entire dataset.\"], \"questions\": [\"It is very hard to understand Figure 1. There are barely any labels and barely any text in the caption to explain what each of the icons in the figure means. The first teaser figure should be very easy to understand and should convey an overall takeaway from the method, and not describe the method itself.\", \"Can you explain how you get an object's occupancy O near L206?\", \"If an object's occupancy is 0, why should it's new feature representation be added to the memory?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper presents the O2VIS framework, aiming to improve long-term consistency in video instance segmentation. By introducing Instance Occupancy Memory (IOM) and Decoupled Object Association (DOA), this method enhances the stability of object tracking in dynamic video scenes, effectively differentiating between recurring and new objects. The paper demonstrates the approach's performance on multiple benchmark datasets, such as YouTube-VIS and OVIS, highlighting its advantages in accuracy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Application-oriented Innovation: O2VIS introduces occupancy information into the memory update process, making it possible to maintain object identity consistency more accurately in scenes where objects frequently appear and disappear. This occupancy-aware memory management strategy provides a useful enhancement for video instance segmentation.\\n\\n2. Empirical Support: The experimental results on various benchmark datasets show improved average precision (AP) and average recall (AR), supporting the method's effectiveness. Additionally, the ablation studies validate the contributions of IOM and DOA, strengthening the reliability of the results.\\n\\n3. Well-designed Component Structure: The decoupled approach in DOA separately manages existing and new objects, using occupancy-guided Hungarian matching to reduce incorrect associations. This is a practical and effective design choice.\", \"weaknesses\": \"1. Limited Theoretical Innovation: The calculation of foreground probability is essentially a weighted adjustment using output probabilities and does not introduce a novel computational framework or algorithm. IOM and DOA represent more of an applied enhancement of existing memory and association techniques, rather than a fundamental theoretical breakthrough, which may limit the impact at conferences focused on theoretical innovation.\\n\\n2. Unclear Generalizability: The method is primarily designed for video instance segmentation, and its applicability to other tasks, such as multi-object tracking, has not been demonstrated. Verifying IOM and DOA\\u2019s effectiveness in other visual tasks would strengthen the paper\\u2019s generalizability.\\n\\n3. Dependence on Pre-trained Model Accuracy: Since the foreground probability relies on classification outputs, errors in these outputs could lead to incorrect memory updates, potentially destabilizing tracking performance. This dependency might reduce overall system stability, particularly when applied to longer or more complex video sequences.\", \"questions\": \"Is the effectiveness of IOM and DOA useful in other tasks such as multiple object tracking?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
1BdPHbuimc | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models | [
"Zhenyu Pan",
"Haozheng Luo",
"Manling Li",
"Han Liu"
] | We present a Chain-of-Action (CoA) framework for multimodal and retrieval-augmented Question-Answering (QA). Compared to the literature, CoA overcomes two major challenges of current QA applications: (i) unfaithful hallucination that is inconsistent with real-time or domain facts and (ii) weak reasoning performance over compositional information. Our key contribution is a novel reasoning-retrieval mechanism that decomposes a complex question into a reasoning chain via systematic prompting and pre-designed actions. Methodologically, we propose three types of domain-adaptable `Plug-and-Play' actions for retrieving real-time information from heterogeneous sources. We also propose a multi-reference faith score to verify conflicts in the answers.
In addition, our system demonstrates that detecting the knowledge boundaries of LLMs can significantly reduce both LLM interaction frequency and token usage in QA tasks. Empirically, we exploit both public benchmarks and a Web3 case study to demonstrate the capability of CoA over other methods. | [
"large language model",
"question answering",
"chain-of-thought"
] | Accept (Poster) | https://openreview.net/pdf?id=1BdPHbuimc | https://openreview.net/forum?id=1BdPHbuimc | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zVVajcOjX9",
"vJRhpfKiEC",
"qpPm29YkkU",
"ngx6qlxoh3",
"l1DktOVSQP",
"kb6Mh6GSso",
"iLkQc4ygKa",
"ftJBv4vnva",
"b1zpyTBiPC",
"YxgLKGEVmo",
"WeYo6UHpmu",
"UkcYjebKcx",
"TRtWDFAzHg",
"TB3v4VhieU",
"R2ZkZBjh5d",
"OxQnM9gSuT",
"LXxlR3FcYk",
"ISAdYFrbBb",
"HQ9xdVhSSH",
"HPP4AbMuTk",
"FGCyl8expf",
"DtP1fo93Tj",
"8GPw6TbPlV",
"6HbAS1PEI9"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1732976349215,
1732089430772,
1737523709145,
1732318430448,
1732092103252,
1732092071694,
1734887772759,
1732148034601,
1732088141103,
1730376572009,
1732091966728,
1732088200150,
1732088015792,
1732089302141,
1733214058205,
1732148054463,
1732742802015,
1732318481865,
1732318518923,
1733211254706,
1732147976636,
1732954961098,
1730669974822,
1730674239610
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Area_Chair_QTw5"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Reviewer_VgSf"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Reviewer_wYMh"
],
[
"ICLR.cc/2025/Conference/Submission5480/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5480/Reviewer_mjpZ"
],
[
"ICLR.cc/2025/Conference/Submission5480/Reviewer_wYMh"
],
[
"ICLR.cc/2025/Conference/Submission5480/Reviewer_mjpZ"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for your thoughtful feedback and for taking the time to review our revised manuscript. We greatly appreciate your recognition of our efforts to address your concerns, and we are delighted that the revisions have met your expectations. Your support and encouragement mean a lot to us as we continue to refine our work. Thank you once again for your valuable input!\\n\\nAuthors\"}",
"{\"title\": \"Reply to Question 1, 2\", \"comment\": \"---\", \"questions_1\": \">Can you elaborate on the key differences between \\\"thoughts\\\" in CoT and \\\"actions\\\" in CoA? How does this change improve the overall performance? It would also be helpful if you can discuss the limitations and trade-offs between them.\", \"answer_1\": \"Thank you for your thoughtful question. The key difference between \\\"thoughts\\\" in CoT and \\\"actions\\\" in CoA lies in their approach to handling complex problems and their reliance on LLM capabilities. The core of CoT is to decompose a complex problem into sequential steps (thoughts) that a transformer-based model can solve within its theoretical capacity. Each \\\"thought\\\" represents one step, and the process of generating them is constrained entirely by the LLM's inherent abilities. This approach relies solely on the LLM's parametric knowledge to progressively handle each step until arriving at a final answer. In contrast, CoA extends this framework by introducing external tools that the LLM can utilize. CoA enables the model to consider the availability of retrieval tools when decomposing a complex problem. Instead of being limited to internal reasoning, CoA first addresses sub-questions that fall within the LLM's knowledge boundaries and delegates sub-questions beyond its scope to specific external tools. Each \\\"action\\\" in CoA is a structured unit comprising the decomposed sub-question, the assigned tool, and a flag indicating whether external help is required. This transition from \\\"thoughts\\\" to \\\"actions\\\" introduces a more nuanced and multidimensional analysis process compared to the straightforward decomposition in CoT. By integrating different dimensions of reasoning and external knowledge retrieval, CoA enhances the ability to address real-world scenarios that often require more context and dynamic information than CoT can provide.\", \"improvement_in_performance\": \"This shift improves overall performance in scenarios where LLMs alone are insufficient, such as those requiring real-time information or highly specialized external knowledge. By leveraging tools dynamically, CoA reduces the dependency on LLMs to guess or approximate answers, resulting in higher accuracy and richer, more contextually grounded responses.\", \"limitations_and_trade_offs\": \"\", \"limitations_of_coa\": \"\", \"latency\": \"Incorporating external tools adds latency, as retrieval and processing require additional time.\", \"tool_dependency\": \"The performance depends on the quality and reliability of the external tools.\", \"implementation_complexity\": \"Designing actions and ensuring smooth integration with tools can be more complex than CoT.\", \"trade_offs_compared_to_cot\": \"Simplicity vs. Capability: CoT is simpler and faster as it relies only on the LLM but is limited by its parametric knowledge. CoA sacrifices simplicity for enhanced capability by integrating external tools.\", \"scalability\": \"While CoT scales efficiently within the LLM, CoA requires careful scaling and optimization of the retrieval process for practical applications.\\nWe hope this clarifies the distinctions and trade-offs between CoT and CoA and how the latter improves performance while addressing real-world challenges. 
Thank you again for your insightful question!\\n\\n---\", \"question_2\": \">If the system doesn't have the ability to add additional actions like web query, does CoA still perform better than CoT.\", \"answer_2\": \"Thank you for the excellent question. Yes, even without additional actions like web query, CoA still performs better than CoT. **In Table 2, we conducted an initial evaluation by comparing the CoA without Retrieval version against baselines that rely solely on the LLM's inherent capabilities**. The results show that CoA still outperforms all baselines. To further investigate the reasons behind this, we selected a representative case study and provided a **detailed analysis in Appendix Section E**. This comparison highlights that CoA offers a richer analysis by integrating multiple aspects of the scenario into a comprehensive reasoning chain. Unlike CoT, which often produces straightforward, surface-level interpretations, CoA contextualizes responses within broader social and procedural frameworks, addressing both the direct question and its underlying implications. This insight demonstrates how CoA provides deeper, more contextually enriched answers compared to CoT, making it particularly effective for complex scenarios requiring nuanced understanding. We hope this addresses your question thoroughly, and we appreciate your thoughtful input!\"}",
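To make the "action" unit described in this reply concrete, a minimal sketch of such a structure is shown below; the field names are illustrative, and the three retrieval action types follow the categories named in the meta-review (web querying, knowledge encoding, data analysis).

```python
from dataclasses import dataclass
from enum import Enum

class ActionType(Enum):
    """Plug-and-play retrieval actions; identifiers are illustrative."""
    WEB_QUERY = "web querying"
    KNOWLEDGE_ENCODE = "knowledge encoding"
    DATA_ANALYZE = "data analysis"
    NONE = "no retrieval"          # sub-question answerable from parametric knowledge alone

@dataclass
class Action:
    sub_question: str              # one node of the decomposed reasoning chain
    tool: ActionType               # which external action (if any) is assigned to this node
    needs_retrieval: bool          # flag: does the node fall outside the LLM's knowledge boundary?

chain = [
    Action("Which product is the user asking about?", ActionType.NONE, False),
    Action("What is its current market price?", ActionType.DATA_ANALYZE, True),
]
```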
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Follow-Up on Rebuttal Phase Feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for reviewing our paper and providing such valuable and constructive feedback. We have carefully studied your suggestions and made several revisions, adding extensive experimental details to enhance the paper\\u2019s clarity, depth, and contributions. Your insightful comments have been instrumental in guiding these improvements.\\n\\nWe sincerely hope that if you have any further questions or concerns, you will not hesitate to let us know. We are more than willing to provide additional clarifications and supporting materials to ensure the paper meets the highest standard. Your insights are crucial for refining our research and ensuring its relevance and impact.\\n\\nAdditionally, we kindly hope that these updates and clarifications will encourage you to reconsider your evaluation, as they directly address your constructive feedback. Should you have any additional queries or reservations, please feel free to contact us at any time. We are fully committed to addressing all concerns to your satisfaction.\\n\\nThank you once again for your invaluable time and support.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Reply to Question 4:\", \"comment\": \"---\", \"question_4\": \"> Could the use of CoA extend to tasks requiring intricate reasoning paths that involve recursive or nested logic?\", \"answer_4\": \"Thank you for the insightful question! Yes, our current CoA framework is designed to leverage the LLM's parametric knowledge and our MRFS verification mechanism to significantly reduce redundant external processing, costs, and latency. While CoA already demonstrates strong performance on complex tasks, such as multi-turn open-domain datasets like QReCC and long-form datasets like ASQA, we have also devised a solution for handling even more intricate reasoning paths: an iterative generation mechanism.\\n\\nThis approach involves generating an initial action chain and iteratively refining it. If any sub-node requires external retrieval for correction or imputation, the completed chain is fed back as input to regenerate an updated chain. This process continues for up to 10 iterations or until no sub-node requires retrieval, dynamically adapting the reasoning path based on the latest information. By doing so, we can minimize the generation of irrelevant sub-questions that may occur in a single-generation process. However, this iterative approach requires additional processing time, which is why in our current paper, we limit CoA to a single generation for efficiency. Our findings indicate that even with this limitation, CoA's performance is satisfactory for most tasks. **Additionally, we have included this iterative mechanism in the appendix of the revised version as a potential direction for future work.** In the future, we aim to explore how to achieve optimal performance with minimal time overhead, potentially making iterative refinements more practical for more complicated applications. Thank you again for your thoughtful feedback!\"}",
"{\"title\": \"Reply to Question 2, 3\", \"comment\": \"---\", \"question_2\": \"> How does CoA handle discrepancies or conflicts when sources provide contradictory information?\", \"answer_2\": \"Thank you for this excellent question! Handling discrepancies or conflicts when sources provide contradictory information is indeed a critical challenge. In our current framework, when performing web searches, we retrieve the top *k* most similar results and pass them all to the LLM, prompting it to select the most reliable answer. This approach has the advantage of saving time, as it allows us to obtain a single result in one step. However, it can introduce instability due to the LLM's reliance on potentially conflicting sources.\\n\\nWe have also considered an alternative approach where each page is summarized individually into potential answers, followed by a voting mechanism to identify the most common or consistent information. While this method can improve accuracy by leveraging consensus, it requires multiple LLM calls, significantly increasing latency. As such, we have made a trade-off between time and accuracy in our current design.\\n\\nLooking ahead, we plan to explore faster and more effective methods to resolve conflicts among external data sources. For example, assigning weights to different sources based on their credibility or using additional retrieval rounds for fact-checking could help improve both reliability and efficiency. These directions will further enhance CoA's ability to handle contradictory information in a robust and timely manner. Thank you again for highlighting this important aspect!\\n\\n---\", \"question_3\": \"> Are there plans to explore CoA's performance in real-time, fast-evolving information retrieval scenarios where data may change rapidly (e.g., live news events)?\", \"answer_3\": \"Thank you for the excellent question! Yes, as highlighted in our paper, the Web3 QA use case is a prime example of a real-time, fast-evolving scenario. The Web3 field is rapidly growing, with new products and concepts emerging daily, often tied to specific cryptocurrencies. In our system, investment advice is the most frequently queried category, accounting for approximately 68% of total queries. To address these queries effectively, we need to quickly identify the mentioned products and retrieve relevant information via web search and market data for the LLM to process. User feedback suggests that the current results are satisfactory for most users. However, investment advice inherently involves subjective factors and unpredictability, so users typically treat the LLM\\u2019s suggestions as references rather than absolute decisions. In high-frequency real-world applications, we optimize the retrieval process through engineering efforts like parallelization and fuzzy search to reduce latency. For fairness in our experiments, we used a standardized retriever across baselines. However, in practical scenarios, each task\\u2014such as vector database management, vector search, or web search\\u2014has dedicated teams working to improve their performance. This means that CoA can further benefit from advancements in any of these areas, leading to a synergistic evolution. 
**Additionally, we have included experiments in Table 3 of the revised version showing the LLM\\u2019s input and output token usage, as well as the overall average latency.** These results demonstrate that CoA outperforms other RAG-based frameworks by leveraging the LLM\\u2019s parametric knowledge to significantly reduce overall costs and latency. Looking forward, we plan to construct a real-time benchmark using back-testing in the Web3 investment market. This benchmark will enable the community to evaluate performance in fast-evolving scenarios and further validate the effectiveness of approaches like CoA. **We have added this to the Appendix as future work.** Thank you again for your thoughtful question!\"}",
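The two conflict-handling strategies discussed in this reply (the current single-pass selection over the top-*k* sources, and the alternative summarize-then-vote scheme) can be sketched as follows. `llm_answer` is an assumed wrapper around the chat model and the prompts are illustrative only.

```python
from collections import Counter

def resolve_single_pass(sub_question, top_k_pages, llm_answer):
    # Current behavior: hand all top-k retrieved pages to the LLM at once and
    # let it pick the most reliable answer -- one call, lower latency.
    prompt = (
        f"Question: {sub_question}\n"
        "The sources below may conflict. Give the most reliable answer.\n\n"
        + "\n\n".join(top_k_pages)
    )
    return llm_answer(prompt)

def resolve_by_voting(sub_question, top_k_pages, llm_answer):
    # Alternative: summarize each page into a candidate answer, then keep the
    # most common candidate -- more robust to outliers, but k LLM calls.
    candidates = [
        llm_answer(f"Question: {sub_question}\nAnswer using only this source:\n{page}")
        for page in top_k_pages
    ]
    winner, _ = Counter(candidates).most_common(1)[0]
    return winner
```

The source-credibility weighting mentioned as future work could be added to the voting variant by replacing the plain `Counter` tally with a weighted one.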
"{\"metareview\": \"This paper introduces the Chain-of-Action (CoA) framework, a new approach to improving multimodal and retrieval-augmented QA for LLMs. The framework tackles two core challenges in QA\\u2014unfaithful reasoning and information hallucination\\u2014by decomposing complex questions into sequential reasoning steps termed \\\"actions.\\\" These actions include web querying, knowledge encoding, and data analysis, enabling systematic integration of heterogeneous data sources. Additionally, the authors propose the multi-reference faith score (MRFS), which cross-validates model outputs against multiple sources to improve reliability. Empirical results demonstrate CoA's effectiveness, showing improved performance across public QA benchmarks. The framework claims advantages in cost-efficiency (via reduced API calls) and flexibility, allowing integration of diverse data types.\", \"strenghts\": [\"CoA\\u2019s structured decomposition into sub-questions/actions represents a significant advancement in tackling complex QA tasks. Its modular design provides adaptability for various data types and retrieval methods.\", \"The proposed framework consistently outperforms baseline methods (e.g., CoT, DSP, SearchChain) across multiple QA benchmarks, demonstrating its robustness and versatility.\", \"The introduction of MRFS as a reliability metric enhances the trustworthiness of responses, mitigating common issues like hallucination and conflicting information.\", \"CoA demonstrates efficiency in reducing token usage and API calls. This provides clear advantages for scalability in cost-sensitive deployments.\"], \"weaknesses\": \"\", \"the_initial_weaknesses_of_the_paper_raised_by_reviewers_include_the_following\": [\"Terminology used in the paper, such as \\\"multimodal,\\\" \\\"plug-and-play,\\\" and \\\"chain-of-action,\\\" are ambiguously defined or potentially misapplied (e.g., text and tabular data being labeled as \\\"multimodal\\\").\", \"The paper does not include sufficient ablation studies to isolate the contributions of individual components, such as the impact of MRFS or the use of tabular data in performance gains.\", \"While CoA reduces token usage, its modularity and additional retrieval steps may introduce significant latency, especially with complex or real-time data sources. The study lacks an analysis of response times and scalability for dynamic scenarios.\", \"The framework's applicability to non-textual or highly unstructured data, such as visual or live-stream data, remains unexplored, which limits its generalizability.\", \"During the rebuttal stage, the authors revised the manuscript and addressed the majority of these weaknesses.\", \"Overall, this paper makes a meaningful contribution in improving multimodal and retrieval-augmented QA for LLMs. The paper received strong ratings with at least two reviewers excited about the paper, and no reviewer opposing acceptance. After evaluating the discussions and the revision, to me it appears that the authors have adequately addressed the major concerns.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewer mjpZ acknowledged that the author response effectively addressed all their questions and concerns, which led to an increase in their evaluation scores. Although reviewer VgSf did not provide feedback on the author's responses, the authors have revised the manuscript to address each of the reviewer VgSf' questions and concerns.\"}",
"{\"title\": \"General Rebuttal/Revision Response - Continued 1\", \"comment\": [\"We expanded the details on the expert evaluation process to enhance transparency and credibility. The experts were selected based on a rigorous survey of Web3 practitioners, and the process is now described in Appendix Section F. This includes criteria, qualifications, and the questions used during evaluation. These additions ensure readers understand the reliability of the evaluation methodology.\", \"We clarified the definition of \\\"imputation\\\" in Table 2, referring to filling unanswered sub-questions with retrieved information. Additionally, we renamed CoA-MRFS as CoA(MRFS in verification) to clearly indicate its scope in verification. We also clarify the difference between ROUGE in Table 2 and ROUGE-L used in ASQA long-form QA dataset. These adjustments address confusion around terminology and results in Table 2.\", \"We addressed the reviewer\\u2019s suggestion to include separate statistics for input and output tokens and even the latency in Table 3. The revised table now provides detailed statistics, including average token usage per action and corresponding time costs. These updates provide a comprehensive understanding of cost details, aligning with the reviewer's request.\", \"We clarified \\\"knowledge boundary\\\" to refer to the parametric knowledge the model has learned during training. Leveraging this boundary reduces unnecessary retrieval efforts in RAG-based frameworks, optimizing token usage and LLM interactions. This explanation is now included in the introduction for clarity.\", \"We ensured comparability between CoA and previous studies by using the same model (gpt-3.5-turbo) across all baselines for answer generation. GPT-4 was only used for evaluation purposes, as clarified in Section 3.1. This ensures a fair and consistent experimental setup.\", \"We updated the section titles in 2.2.1 and 2.2.2 to \\\"Data Collection\\\" and \\\"Data Verification\\\" to align with their content. This change simplifies comprehension and better reflects the focus of the corresponding sections.\", \"---\", \"**Reviewer wYMh**\", \"We addressed the concern about the potential latency introduced by CoA by adding the average response time and token consumption details for all baselines in Table 3. These updates show that CoA reduces unnecessary retrievals by leveraging parametric knowledge and employing MRFS for verification, while engineering optimizations, like parallel processing, further minimized response time for real-time applications.\", \"We conducted an ablation study to evaluate the impact of removing specific actions, such as web search and local knowledge base search, on performance. The results, added to Table 2, demonstrate that web search contributes more significantly to performance improvements, particularly for non-factual and open-ended QA scenarios, providing clarity on the importance of each action type.\", \"We compared CoA with and without external \\\"plugs\\\" across different question types, showing that CoA significantly improves performance in complex scenarios like cross-domain transitions and long-form answers. 
In contrast, the improvements are modest for simpler, commonsense questions, highlighting CoA's adaptability and effectiveness in handling diverse challenges.\", \"We clarified the key differences between \\\"thoughts\\\" in CoT and \\\"actions\\\" in CoA, emphasizing that CoT relies solely on internal reasoning while CoA integrates external tools for complex tasks. This distinction enables CoA to address real-world scenarios with richer, more dynamic reasoning but introduces additional latency and complexity, balanced against its superior capability for nuanced problems.\", \"We demonstrated that CoA outperforms CoT even without external actions, as shown in Table 2, by producing more contextualized and comprehensive responses. A detailed case study in Appendix Section D further illustrates CoA\\u2019s ability to address nuanced questions effectively, even within the limits of an LLM\\u2019s parametric knowledge.\", \"We addressed the concern about latency by showing that CoA reduces overall latency, LLM calls, and token costs compared to RAG-based baselines, as reported in Table 3. These optimizations, combined with MRFS verification and knowledge boundary detection, ensure CoA remains efficient and practical for real-time applications while addressing complex information needs.\", \"**Reviewer VgSf**\", \"We addressed the concern regarding the potential extension of CoA to diverse and unstructured data modalities by outlining these possibilities in response to Question 1. This clarification highlights CoA's adaptability and its feasibility for tasks involving broader data types.\", \"We addressed the scalability and efficiency challenges of integrating complex or real-time data sources in response to Question 3. This includes strategies for handling rapidly evolving information to ensure CoA remains efficient and effective in dynamic scenarios.\"]}",
"{\"title\": \"Reply to Weakness 6, 7 and Question 1\", \"comment\": \"Weakness 6\\n\\n> The study does not include ablations to show the specific contribution of tabular data. Providing such analyses could clarify its impact on the framework's performance.\", \"answer_6\": \"Thank you for your thoughtful suggestion. We agree that it is important to provide a comparison to clarify the specific contribution of tabular data. As described in Section 2.2.1 - Action 3, our tabular data action is used exclusively in the Web3 case to retrieve market data. **Based on your feedback, we revisited our experiments and consulted the Web3 experts we worked with earlier to evaluate the performance of the framework in scenarios where tabular data is not available for support**. **The results of this analysis have been included in the revised version, specifically in Table 6, as part of an ablation study.** From the updated results, we observe a significant decline in the coverage and overall quality metrics without the inclusion of real-time market prices, highlighting the lack of truly useful references. However, the non-redundancy and readability metrics show negligible differences. We hope this additional analysis provides clarity and addresses your concern.\\n\\n---\", \"weakness_7\": \"> Section 3.2 mentions expert evaluation on a 1 to 3 scale based on three criteribut it lacks details on the expert recruitment process, qualifications, and any inter-rater reliability metrics. Adding these details would increase the transparency and credibility of the evaluation process.\", \"answer_7\": \"Thank you for pointing out the need for additional details regarding the expert evaluation process. We appreciate the opportunity to provide further clarity. Our expert evaluators were selected from a pool of professionals actively working in the Web3 domain. During the initial stages of product development, we conducted a targeted survey distributed to well-known Web3 practitioners. The survey included 20 questions, with 10 focusing on foundational concepts in Web3 and the remaining 10 being open-ended questions designed to assess their understanding and vision of the Web3 field. Responses were scored, with three senior Web3 investors evaluating the open-ended answers based on their expertise and perspective. From this process, we selected the top 20 candidates with the highest overall scores to serve as our evaluation experts. As an incentive and to ensure continued engagement, these experts were granted free early-stage access to the product. This rigorous selection process was designed to ensure that the evaluators possessed both technical expertise and a nuanced understanding of the Web3 domain. Additionally, **we have included a detailed description of this process and 20 questions in the revised version's Appendix Section F**, as we believe this addition strengthens the reliability and transparency of the paper. Thank you again for your thoughtful feedback.\\n\\n---\", \"question_1\": \"> Could you clarify what \\u201cimputation\\u201d refers to in Table 2? Are there results available for CoA without MRFS, and what does \\u201cw/ ROUGE\\u201d mean? My understanding was that ROUGE is used only in ASQA.\", \"answer_1\": \"1.1. Clarify \\u201cimputation\\u201d:\\n\\nThe term \\\"imputation\\\" in Table 2 refers to the process when llm encounters a sub-question that it cannot answer (and leaves the initial answer blank), we utilize retrieved information to provide an answer for that sub-question. 
**We have clarified the meaning in Section 3.1-Baselines of the revised version.**\\n\\n1.2. Results of CoA without MRFS\\n\\nTo clarify: MRFS is used only in the verification step, so CoA without MRFS is equivalent to CoA without verification, whose results are already listed in Table 2. **We have changed the name of CoA-MRFS to CoA(MRFS in verification) in Table 2 of the revised version.**\\n\\n1.3. Meaning of \\u201cw/ROUGE\\u201d\\n\\nWe apologize for any confusion regarding this term. The ROUGE in Table 2 is different from the way ROUGE-L is used in the ASQA dataset. In our work, the verification metric MRFS (in Section 2.2.2) was inspired by the original ROUGE. Therefore, we included \\\"verification with ROUGE\\\" in Table 2 as a baseline for comparison against \\\"verification with MRFS.\\\" It highlights that our MRFS yields better verification than the original ROUGE. On the other hand, in the ASQA dataset, ROUGE-L is used specifically to evaluate long-form QA responses, which serves a different purpose than the ROUGE in Table 2. Notably, ROUGE-L focuses on the Longest Common Subsequence (LCS), making it particularly suitable for evaluating the fluency and coherence of long-form answers, which often have more diverse phrasing and structure compared to shorter responses. **We have clarified the difference between ROUGE and ROUGE-L in the ASQA dataset in Section 3.1-Metrics of the revised version.**\"}",
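The verification and imputation behavior described in these answers can be illustrated with the short sketch below. Note that `overlap_score` is a simple unigram-overlap stand-in rather than the actual MRFS formula from Section 2.2.2, and the threshold and data layout are assumptions made for the example.

```python
def overlap_score(answer, reference):
    # Crude lexical-overlap proxy for a faithfulness score (not the real MRFS).
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / max(len(a), 1)

def verify_and_impute(sub_answers, references, threshold=0.5):
    """Keep a sub-answer if some retrieved reference supports it above the
    threshold; impute blank or unsupported answers from the retrieved passages."""
    out = []
    for answer, refs in zip(sub_answers, references):
        if not refs:
            out.append(answer)      # nothing retrieved: keep the parametric answer
        elif answer and max(overlap_score(answer, r) for r in refs) >= threshold:
            out.append(answer)      # verified against external sources
        else:
            out.append(refs[0])     # imputation: fill in with the top passage
    return out
```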
"{\"summary\": \"The paper presents the Chain-of-Action (CoA) framework designed to improve large language models' (LLMs) performance in multimodal and retrieval-augmented question-answering (QA). CoA addresses challenges such as hallucination and weak compositional reasoning by decomposing questions into reasoning chains. This method incorporates plug-and-play actions for retrieving diverse data sources and uses a novel multi-reference faith score for verification. Empirical results show CoA outperforms other methods in public benchmarks and real-world applications.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"**Innovative Framework**: The CoA's structured decomposition into sub-questions and its use of domain-adaptable plug-and-play actions represent a significant advancement in enhancing the faithfulness and accuracy of LLM responses.\", \"**Empirical Validation**: Demonstrated strong performance on benchmarks and real-world applications, notably outperforming existing baselines in multimodal QA tasks.\", \"**Verification Mechanism**: The multi-reference faith score is an effective metric for cross-validating LLM-generated answers against external sources, enhancing reliability.\", \"**Practical Impact**: Real-world implementation in a Web3 QA system showed increased user engagement and positive feedback, validating the method's applicability.\"], \"weaknesses\": [\"While the CoA approach shows strong empirical performance, its adaptability to more diverse or unstructured data modalities beyond text and tabular data remains to be proven.\", \"The scalability and efficiency when integrating more complex or real-time data sources require further exploration, especially in scenarios with rapidly changing information.\", \"The approach, despite its modular design, may face challenges in tasks involving higher-order reasoning or complex multi-step dependencies that are not purely fact-based.\"], \"questions\": \"1. Can the authors provide more details on how the CoA framework could be adapted for tasks involving visual or mixed data modalities?\\n2. How does the framework handle discrepancies or conflicts when sources provide contradictory information?\\n3. Are there plans to explore CoA's performance in real-time, fast-evolving information retrieval scenarios where data may change rapidly (e.g., live news events)?\\n4. Could the use of CoA extend to tasks requiring intricate reasoning paths that involve recursive or nested logic?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reply to Weakness 1, 2, 3 and Question 1\", \"comment\": \"Dear Reviewer VgSf:\\n\\nThank you for recognizing our work that our structured decomposition and plug-and-play actions enhance LLM response accuracy, with strong empirical validation on benchmarks and real-world Web3 QA applications. And here are our reply to your comments:\\n\\n---\", \"weakness_1\": \"> While the CoA approach shows strong empirical performance, its adaptability to more diverse modalities remains to be proven.\", \"answer_1\": [\"Thank you for your insightful question! The CoA framework currently supports text and tabular data modalities, with its feasibility validated through real-world product deployments. For visual modalities, we have already integrated capabilities in our Web3 QA use case. Specifically, we designed a new action module to process cryptocurrency market trend charts using Vision-Language Models (VLMs). These models extract insights related to sub-questions, such as trend fluctuations or patterns. By combining this visual information with K-line knowledge from local knowledge base, CoA generates more credible answers compared to relying on tabular data.\", \"Also, we experimented with adapting CoA for broader visual and mixed data tasks using advanced vision-language models like Qwen-VL. These models enable CoA to dynamically retrieve and analyze visual information alongside textual data. For instance, when analyzing a cryptocurrency market price trend chart and answering whether a Web3 product is a good investment, CoA retrieves relevant background information, recent news, and supporting data. These inputs are synthesized with Qwen-VL\\u2019s visual analysis to provide a comprehensive and well-informed response.\", \"In addition, we have some future plan:\", \"1. New Actions for Visual Data:\", \"**Visual-Querying Action:** Similar to web-querying, this action would involve retrieving visual data from sources (e.g., image databases or APIs) relevant to a sub-question. For instance, for an image-based question, the action could retrieve relevant images and pass them to a vision model like Qwen-VL for feature extraction.\", \"**Visual-Reasoning Action:** This action would utilize vision-language models such as Qwen-VL to answer sub-questions by interpreting visual inputs in combination with textual context. The retrieved information can then be integrated into the reasoning chain.\", \"**Multimodal-Analyzing Action:** For tasks requiring integration of visual and textual data (e.g., charts, annotated images, or multimedia documents), this action can process and align multimodal embeddings using models like Qwen-VL.\", \"2. Integration with Qwen-VL\", \"**Reasoning and Retrieval Pipeline:** Qwen-VL\\u2019s capability to handle image-text tasks can be seamlessly integrated into the CoA reasoning chain by invoking its API for visual sub-question processing. For example:\", \"If a sub-question asks for identifying objects in an image, the CoA framework can trigger a Qwen-VL-based action.\", \"The embeddings generated by Qwen-VL can serve as inputs to subsequent actions for cross-modal reasoning.\", \"**Multi-Reference Faith Score (MRFS) for Visual Tasks:** The MRFS metric can be extended to verify the alignment between retrieved visual data and LLM-generated responses, ensuring faithfulness in multimodal tasks.\", \"3. 
Future Vision:\", \"The modularity of CoA allows the addition of other advanced vision-language models, including task-specific fine-tuned variants, to further expand capabilities.\", \"Future work could explore fine-tuning the interaction protocols between textual reasoning and visual data interpretation.\", \"We thank the reviewer for their constructive proposal, and we believe incorporating Qwen-VL and other vision-language tools into CoA will significantly broaden its applicability and strengthen its contributions to multimodal and retrieval-augmented QA\", \"---\"], \"weakness_2\": \"> The scalability and efficiency when integrating more complex or real-time data sources require further exploration.\", \"answer_2\": \"Thank you for your comment! Please refer to our response to Question 3, where we discuss the integration of complex and real-time data sources, including strategies for handling rapidly changing information. We hope this addresses your concerns effectively!\\n\\n---\", \"weakness_3\": \"> The approach, may face challenges in tasks involving higher-order reasoning or complex multi-step dependencies.\", \"answer_3\": \"Thank you for your comment! Please refer to our response to Question 4, where we discuss how CoA addresses tasks requiring intricate reasoning paths and complex multi-step dependencies. We believe the explanation there will provide clarity and address your concerns effectively.\\n\\n---\", \"question_1\": \"> Can the authors provide more details on how the CoA be adapted for tasks involving visual or mixed data modalities?\"}",
"{\"title\": \"Reply to Question 2, 3, 4, 5\", \"comment\": \"---\", \"question_2\": \"> In Table 3, could you provide separate statistics for input and output tokens, as well as the average token usage per action? This would help readers better understand the specific cost details.\", \"answer_2\": \"Thank you for your valuable suggestion! **We have added the requested results to the revised version and updated Table 3 to include separate statistics for input and output tokens.** Additionally, for actions like web query and database search, we found that the average token usage ratio is approximately 7:3. **We have also included the corresponding average time cost to provide a more comprehensive understanding of the cost details.** We hope these updates address your concerns and improve the clarity of our paper. Thank you again for your helpful feedback!\\n\\n---\", \"question_3\": \"> Could you elaborate on what is meant by the term \\u201cknowledge boundary\\u201d?\", \"answer_3\": \"Thank you very much for your question. By \\\"knowledge boundary,\\\" we are referring to the parametric knowledge that the model has already acquired during training on a high-quality dataset. Given that LLMs with extensive parameters have undergone costly training, leveraging the comprehensive knowledge boundary that the LLM has already mastered can significantly reduce unnecessary retrieval efforts\\u2014particularly during the filtering and summarization phases after coarse retrieval\\u2014in a RAG-based framework. In other words, it helps decrease the LLM interaction frequency and token usage during the QA task. **We have clarified the meaning more clearly in the introduction section.**\\n\\n---\", \"question_4\": \"> Are the results of the Chain-of-Action framework directly comparable to previous studies? I noticed that this study used GPT-4, while DSP and SearchChain relied on older-generation LLMs (text-davinci-002 and gpt-3.5-turbo, respectively).\", \"answer_4\": \"Thank you very much for your question, and I apologize for any confusion. As mentioned in Section 3.1 under \\\"Implementation,\\\" the reference to GPT-4 in our paper only belongs to the evaluation process, where we use it to assess whether the answers generated by different baselines, including our proposed CoA framework, align with the ground truth.(see Appendix C for the evaluation prompt details). For the answer generation itself, all baselines, including our CoA, utilize the same backbone model, gpt-3.5-turbo. This ensures a fair and trustworthy comparison across all experiments presented in the study. We greatly appreciate your pointing this out, and **we have clarified this more clearly in the Implementation part as well**.\\n\\n---\", \"question_5\": \"> Would it be fair and perhaps clearer to rename Sections 2.2.1 and 2.2.2 as \\\"Data Collection\\\" and \\\"Data Verification,\\\" instead of \\u201cActions Design\\u201d and \\u201cActions Workflow\\u201d? These alternative terms seem easier to understand and align well with the content of the corresponding subsections.\", \"answer_5\": \"Thank you very much for your suggestion. After careful consideration, we agree that the content under \\\"Actions Workflow,\\\" including Answering Verification and Missing Detection, can indeed be summarized as \\\"Data Verification.\\\" Using \\\"Data Collection\\\" and \\\"Data Verification\\\" as the new titles would make these sections easier to understand. **We have updated the corresponding section titles accordingly.**\"}",
"{\"title\": \"Reply to Weakness 1,2,3,4,5\", \"comment\": \"Dear Reviewer mjpZ:\\n\\nWe sincerely thank the reviewer for recognizing our framework's contributions to improving precision, efficiency, and cost-saving potential, as well as highlighting the value of the multi-reference faith score (MRFS) in enhancing answer reliability. Your thoughtful feedback is greatly appreciated! And here are our reply to your comments:\\n\\n---\\n\\nWeakness 1, 2, 5:\\n\\n> Though the pipeline is straightforward, understanding the study's actual workflow is hindered by (1) inaccurate terminology, (2) loosely connected methodology descriptions, and (3) a mix of abstract workflows and technical details. Certain terms are uncommon or seem misapplied, which leads to confusion. For example, terms like \\\"multimodal\\\" (when referring to text and tabular data), \\\"chain-of-action\\\" (more of a \\\"chain-of-data-collection-and-verification\\\"), \\\"actions design\\\" (data collection), \\\"actions workflow\\\" (data verification), \\\"node\\\" (sub-question), and \\\"knowledge boundary\\\" (what a model actually knows) lack clarity and could benefit from more precise definitions or alternatives.\\n\\nAnswer 1, 2, 5:\\n\\nThank you for your valuable feedback. **In the revised version of our paper, we have carefully addressed the issues you highlighted regarding terminology and clarity.** Specifically, we have refined the definitions and replaced terms like \\\"multimodal\\\", \\\"actions design\\\", and \\\"actions workflow\\\" with more precise alternatives to reduce ambiguity and better align with their intended meanings. For example, we now describe the input as \\\"heterogeneous data\\\" instead of \\\"multimodal data,\\\". We also added a clear definition of knowledge boundary in the answer to Question 3. These changes aim to improve clarity and ensure the terminology is both accurate and intuitive for readers. We believe these updates will effectively address the concerns you raised and enhance the overall readability and precision of the paper.\\n\\n---\", \"weakness_3\": \"> Question decomposition appears critical to this framework, yet there is limited discussion on decomposition strategies or comparisons with existing baselines. Further elaboration here would strengthen the paper's contributions.\", \"answer_3\": \"Thank you for your insightful suggestion. We agree that further elaboration on decomposition strategies strengthens the paper. In Table 2, we compare CoA without actions against other decomposition baselines, emphasizing the differences in reasoning and decomposition methods. Additionally, Appendix D provides a case study comparing CoA and CoT to further clarify these distinctions. To validate the correctness and relevance of decomposed sub-queries, we evaluated 50 questions from the Social QA dataset, yielding 96% correctness and 98% relevance in the CoA, compared to 92% and 96% for CoT. These results highlight strong alignment between sub-queries, decisions, and outcomes. We are also exploring automated evaluation methods for relevance and correctness to improve scalability. **These updates have been included in the revised version**, and we hope they address your concerns. Thank you again for your valuable feedback!\\n\\n---\", \"weakness_4\": \"> The \\\"plug-and-play\\\" feature is presented as a low-cost prompting strategy; however, integrating retrieval for each data type (e.g., web, internal knowledge) may not be straightforward. 
It may be worth reconsidering or refining this claim to better reflect its implementation complexity.\", \"answer_4\": \"Thank you for your valuable feedback. We agree that integrating retrieval for different data types is indeed not straightforward, and we appreciate the opportunity to clarify this point. **In the introduction section of the revised version of the paper, we have refined the claim regarding the \\\"plug-and-play\\\" feature to provide a more accurate description**. Specifically, we now state that: \\u201c*The term 'plug-and-play' refers to the ability to freely add or remove pre-designed actions, such as the three different actions implemented in our work. However, for any new action to be integrated in the future, careful design and adjustment will be required to ensure compatibility with the framework's input and output formats.*\\u201d We believe this revision more accurately reflects the complexity of implementation while maintaining the core idea of flexibility in extending the framework. Thank you again for your thoughtful suggestion, which has helped improve the clarity of our work.\"}",
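The input/output compatibility requirement described in this clarification can be made concrete with a small action registry. The action names follow the paper, but the registry and the placeholder bodies are assumptions for illustration, not the authors' code.

```python
from typing import Callable, Dict

# Every action obeys the same contract: sub-question string in, evidence string
# out, so actions can be added or removed without touching the rest of the chain.
ACTION_REGISTRY: Dict[str, Callable[[str], str]] = {}

def register_action(name: str):
    def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
        ACTION_REGISTRY[name] = fn
        return fn
    return decorator

@register_action("web-querying")
def web_query(sub_question: str) -> str:
    return f"(top-k web snippets for: {sub_question})"  # placeholder retrieval

@register_action("knowledge-encoding")
def kb_search(sub_question: str) -> str:
    return f"(local knowledge-base passages for: {sub_question})"  # placeholder

def run_action(name: str, sub_question: str) -> str:
    return ACTION_REGISTRY[name](sub_question)
```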
"{\"title\": \"Reply to Weakness 1, 2, 3 and Question 3\", \"comment\": \"Dear Reviewer wYMh,\\n\\nThanks so much for your commending our innovative Chain of Action (CoA) mechanism, which leverages multimodal LLM capabilities to integrate actions like web querying and data analysis for QA tasks, noting its significant performance improvements over traditional reasoning methods and its potential to advance QA capabilities. And here are our reply to your comments:\\n\\n---\", \"weakness_1\": \">Based on the number of actions to be taken and what kind of \\\"plug\\\" is used for the action, the time taken to finish all actions and send out an answer might become significant....\", \"answer_1\": \"Thank you for raising this important point. We completely agree that incorporating a study on latency is essential to highlight the advantages of our method. **In the revised version, we have updated Table 3 to include detailed input and output token consumption, as well as the average latency for all baselines.** These results demonstrate that our method effectively leverages the parametric knowledge of large models and employs MRFS for verification, significantly reducing the usage of LLM tokens and minimizing unnecessary retrievals. For fairness and validity, we used the same vector-based retrieval process for similar passages as the baselines. In our Web3 QA case study, we further optimized the retrieval process through engineering efforts like parallel processing, which greatly reduced the latency for each response. This optimization ensures that our method meets the real-time demands of users seeking answers about rapidly changing markets. We hope these additions address your concerns and provide a clearer understanding of our system's efficiency. Thank you again for your thoughtful feedback!\\n\\n---\", \"weakness_2\": \">It would be helpful to conduct an ablation study when you remove specific action types to ...\", \"answer_2\": \"Thank you for your helpful suggestion. **We have addressed this in the revised version by adding more results to Table 2, specifically showing the performance impact when Action 1 (web search) and Action 2 (local knowledge base search) are removed**. From the results, it is evident that the improvement brought by web search is more significant compared to local knowledge base search, especially for non-factual and open-ended QA scenarios. These insights further clarify the relative contribution of each action type to the overall performance. We appreciate your feedback in helping us enhance the analysis!\\n\\n---\", \"weakness_3\": \">Comparing CoA with and without the ability to perform additional \\\"plugs\\\" across different types of questions ...\", \"answer_3\": \"Thank you for your thoughtful question. The impact of CoA on latency depends on the context. Compared to basic baselines that rely solely on the capabilities of large language models (LLMs) for QA, CoA does introduce additional latency due to external data retrieval and processing. However, existing research has consistently shown that relying solely on LLMs is insufficient for addressing many real-world problems, such as small-scale events or real-time information. Therefore, our primary comparison is against RAG-based baselines. 
**As shown in Table 3 of revised version, our experiments demonstrate that CoA significantly reduces various metrics compared to other RAG baselines, including overall latency, the number of LLM calls, and the token costs for LLM input and output.** These results highlight that by simply detecting the knowledge boundaries of LLMs, leveraging their parametric knowledge, and verifying with our MRFS, CoA can substantially reduce the need for external data retrieval, making the overall process more efficient and cost-effective. We hope this addresses your concern and provides clarity on the advantages of our approach. Thank you again for your valuable feedback!\", \"question_3\": \">Does CoA add significant latency to QA process?\"}",
"{\"comment\": \"Thank you for your valuable feedback and for highlighting the importance of the experiments mentioned in \\\"Answer 3.\\\" We are pleased to inform you that we have already incorporated these details into the paper to provide a more comprehensive understanding of where the real gains are coming from. Specifically, the additional content is included in Section 3.1.1, which discusses the performance of CoA with and without retrieval across various question types:\\n\\n***Table 2 also shows that CoA's external \\\"plugs\\\" significantly enhance performance on average 15.3% in complex scenarios, such as open-domain questions requiring cross-domain knowledge transitions (e.g., QReCC) and long-form questions that demand detailed, structured answers. In contrast, the improvements are relatively smaller for simpler, commonsense questions (7.2% on average), where the LLM's parametric knowledge is usually sufficient. This result highlights CoA's strength in addressing complex problems and its ability to extend the model's capabilities effectively.***\\n\\nUnfortunately, as the rebuttal phase does not allow for uploading a revised version of the paper, we are unable to share the updated version at this time. We hope for your kind understanding, and we assure you that the updated version will be available in the next revision.\\n\\nThank you again for your insightful comments and support.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"General Rebuttal/Revision Response - Continued 2\", \"comment\": [\"We addressed the concern about CoA's ability to handle tasks involving higher-order reasoning or complex multi-step dependencies in response to Question 4. This discussion explores how CoA is structured to manage intricate reasoning paths, balancing modular design with enhanced reasoning capabilities.\", \"We clarified that CoA currently supports tabular and text data modalities and has been adapted for visual data in our Web3 QA use case. Specifically, we incorporated Vision-Language Models (VLMs) to process cryptocurrency market charts, combining visual insights with local K-line data for enhanced answer reliability. CoA\\u2019s decomposition strategy enables seamless integration with VLMs, dynamically generating action chains to address visual or mixed data queries effectively.\", \"We explained that CoA mitigates discrepancies by aggregating content from top-k sources and verifying responses using the Multi-Reference Faith Score (MRFS) to ensure consistency and reliability. Future work will explore advanced methods like voting mechanisms or iterative reasoning to further enhance CoA's ability to reconcile conflicting information while balancing accuracy and latency.\", \"We discussed that CoA is already applied in real-time scenarios, such as Web3 QA, where rapidly changing information is prevalent. Optimization efforts like parallelization and fuzzy search enhance retrieval efficiency, as shown in Table 3, where CoA demonstrates superior latency and cost performance compared to RAG-based frameworks. Future work includes building a real-time benchmark to evaluate CoA's effectiveness in dynamic environments.\", \"We highlighted that CoA can handle intricate reasoning paths through an iterative generation mechanism that refines action chains dynamically. While currently limited to single-generation for efficiency, we included an iterative approach in the appendix as a future direction to enhance CoA's capability for more complex applications without incurring excessive latency.\"]}",
"{\"title\": \"Follow-Up on Rebuttal Phase Feedback\", \"comment\": \"Dear Reviewers,\\n\\nThank you once again for your valuable feedback. We have carefully addressed your comments and made substantial revisions to improve the manuscript.\\n\\nAs the discussion phase nears its conclusion, please don\\u2019t hesitate to let us know if you have any further questions or concerns. Wishing you a wonderful Thanksgiving, if you celebrate it, and thank you for your time and consideration.\\n\\nBest regards,\\nAuthors\"}",
"{\"title\": \"Follow-Up on Rebuttal Phase Feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for reviewing our paper and providing such valuable and constructive feedback. We have carefully studied your suggestions and made several revisions, adding extensive experimental details to enhance the paper\\u2019s clarity, depth, and contributions. Your insightful comments have been instrumental in guiding these improvements.\\n\\nWe sincerely hope that if you have any further questions or concerns, you will not hesitate to let us know. We are more than willing to provide additional clarifications and supporting materials to ensure the paper meets the highest standard. Your insights are crucial for refining our research and ensuring its relevance and impact.\\n\\nAdditionally, we kindly hope that these updates and clarifications will encourage you to reconsider your evaluation, as they directly address your constructive feedback. Should you have any additional queries or reservations, please feel free to contact us at any time. We are fully committed to addressing all concerns to your satisfaction.\\n\\nThank you once again for your invaluable time and support.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Follow-Up on Rebuttal Phase Feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for reviewing our paper and providing such valuable and constructive feedback. We have carefully studied your suggestions and made several revisions, adding extensive experimental details to enhance the paper\\u2019s clarity, depth, and contributions. Your insightful comments have been instrumental in guiding these improvements.\\n\\nWe sincerely hope that if you have any further questions or concerns, you will not hesitate to let us know. We are more than willing to provide additional clarifications and supporting materials to ensure the paper meets the highest standard. Your insights are crucial for refining our research and ensuring its relevance and impact.\\n\\nAdditionally, we kindly hope that these updates and clarifications will encourage you to reconsider your evaluation, as they directly address your constructive feedback. Should you have any additional queries or reservations, please feel free to contact us at any time. We are fully committed to addressing all concerns to your satisfaction.\\n\\nThank you once again for your invaluable time and support.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"comment\": \"Thank you for the detailed response. I am satisfied the additional studies performed. I would recommend adding details about experiments mentioned in \\\"Answer 3\\\" to the paper. This study is crucial to understand where the real gains are coming from.\"}",
"{\"title\": \"General Rebuttal / Revision Response\", \"comment\": \"Dear Reviewers,\\n\\nWe thank the reviewers for the insightful questions and reviews. Your time and effort dedicated to improving our work are truly appreciated.\\n\\nWe have done all the experiments suggested and answered all the questions. All modifications are marked in red color.\", \"major_revisions_include\": \"**1. New ablation study different action influence on CoA performance in Table 2**. `reviewer wYMh` `reviewer mjpZ`\\n\\n- Conducted an ablation study different action influence on CoA performance\\n- Results: each module is important to the method performance and each module shows better performance than other baselines.\\n\\n**2. New ablation study on Latency and LLM usage per question in Table 3.** `reviewer wYMh` `reviewer mjpZ`\\n\\n- Conducted an ablation study to evaluate the CoA Latency and input and output tokens with baselines\\n- Results: our method show a less latency and less LLM usage compared with most of baselines.\\n\\n**3. New experiment on performance without tabular data in Table 6.** `reviewer mjpZ`\\n\\n- Conducted an ablation study to performance without tabular data\\n- Results: From the updated results, we observe a significant decline in the coverage and overall quality metrics without the inclusion of real-time market prices, highlighting the lack of truly useful references. However, the non-redundancy and readability metrics show negligible differences.\\n\\n**4. New expert recruitment process and questionaire in Appendix F**: additional expert evaluator for human evaluation `reviewer mjpZ`\\n\\n**5. Revised abstract**: make the description of the methodology more clear. `reviewer mjpZ`\\n\\n**6. Revised Caption in Figure 1**: `reviewer mjpZ`\\n\\n**7 . Revised in Sec 1**: make the description of the methodology more clear and reclassify the contribution of our work `reviewer mjpZ`\\n\\n**8. Revised in Sec 2**: make the description of the methodology more clear `reviewer mjpZ`\\n\\n**9. Revised metrics in Sec 3.1**: update the definition of ROUGE and ROUGE-L metric used in our paper `reviewer mjpZ`\\n\\n**10. Revised Implmentation in Sec 3.1**: refined the claim regarding the \\\"plug-and-play\\\" feature to provide a more accurate description `reviewer mjpZ`\\n\\n**11. Revised Caption in Table 3**: add the new experiment of the latency and input &ouput tokens of CoA `reviewer wYMh` `reviewer mjpZ`\\n\\n**12. Revised Caption in Table 6**: add the new experiment of the performance without tabular action `reviewer mjpZ`\\n\\n\\nWe hope these revisions address the reviewers\\u2019 concerns and improve the overall quality of our paper.\\n\\nThank you again for your review!\\n\\nBest regards,\\n\\nAuthors\\n\\n---\\n\\nBelow, we also summarize the key points in our responses:\\n\\n### Key Points in Our Responses\\n\\n**Reviewer mjpZ**\\n\\n* We addressed the concern regarding the accuracy of terminology and methodology descriptions by refining ambiguous terms like \\\"multimodal\\\" and \\\"actions workflow.\\\" For example, we replaced \\\"multimodal data\\\" with \\\"heterogeneous data\\\" and clarified the meaning of \\\"knowledge boundary\\\" in response to Question 3. These revisions aim to improve the paper's precision and align terminology with its intended meaning, ensuring clarity for readers.\\n* We clarified the decomposition strategies critical to our framework by comparing CoA with established baselines in Table 2. 
Additionally, Appendix D provides a case study to illustrate the distinctions between CoA and CoT. Our analysis of 50 Social QA dataset questions showed CoA achieved 96% correctness and 98% relevance, outperforming CoT. These updates demonstrate the framework's effectiveness and strengthen the contributions of the paper.\\n* We refined the description of the \\\"plug-and-play\\\" feature by specifying that, while pre-designed actions can be easily added or removed, integrating new actions requires careful design to maintain compatibility with input-output formats. This clarification, included in the introduction, provides a more accurate representation of the feature\\u2019s complexity while retaining its flexibility.\\n* We conducted an ablation study to evaluate the specific contribution of tabular data to our framework. The results, presented in Table 6, show that excluding tabular data reduces coverage and quality metrics, emphasizing its importance in retrieving real-time market data. However, non-redundancy and readability metrics were minimally affected. These findings clarify tabular data's role and address the concern about its impact.\"}",
"{\"comment\": \"I appreciate the effort you have put into revising the paper. As all of my concerns have been resolved in the revised version, I have increased my score to 8.\"}",
"{\"summary\": \"The authors propose a new QA retrieval mechanism called Chain of Action(CoA). When a question is asked to an LLM, there is a prompt which generates a list of actions the LLM needs to take first to effectively answer the questions. They introduce a Plug and Play approach where in case of Multimodal LLMs, the actions taken can be integrated into the application. The actions can be web query or data analysis. The paper integrates 3 such actions. The LLM then performs each of the individual action generated and then there is another query which combines information from all the actions. The LLM then gives an answer based on the newly injected information\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors utilize newer multimodal LLM abilities to perform actions such as web query and data analysis. The authors come up with a new QA mechanism for LLMs which uses the actions The method is called Chain of Action(CoA).\\n2. The authors demonstrate that this method significantly outperforms other reasoning and QA methods on many QA datasets. \\n3. The improvement of using actions over thoughts does seem to be the natural way of solving a question. This approach has significant potential for improving QA capabilities of LLMs.\", \"weaknesses\": \"1. Based on the number of actions to be taken and what kind of \\\"plug\\\" is used for the action, the time taken to finish all actions and send out an answer might become significant. It would have been good to see the study on latency(eg. average response time) of the system because of the new method.\\n2. It would be helpful to conduct an ablation study when you remove specific action types to isolate their impact on performance. This would provide clearer insights on how much this method relies on additional capabilities. \\n3. Comparing CoA with and without the ability to perform additional \\\"plugs\\\" across different types of questions can be useful in understanding the impact of this method.\", \"questions\": \"1. Can you elaborate on the key differences between \\\"thoughts\\\" in CoT and \\\"actions\\\" in CoA? How does this change improve the overall performance? It would also be helpful if you can discuss the limitations and trade-offs between them.\\n2. If the system doesn't have the ability to add additional actions like web query, does CoA still perform better than CoT. \\n3. Does CoA add significant latency to QA process?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces the Chain-of-Action (CoA) framework, a novel approach to multimodal and retrieval-augmented question answering that enhances the faithfulness and reasoning quality of large language models (LLMs). CoA addresses key challenges in QA, such as unfaithful responses and weak reasoning, by decomposing questions into a series of reasoning steps or actions that systematically retrieve and verify information from various sources. The framework introduces three \\\"Plug-and-Play\\\" actions\\u2014web querying, knowledge encoding, and data analyzing\\u2014that support multimodal data integration. Additionally, a multi-reference faith score (MRFS) is proposed to resolve inconsistencies and improve response accuracy. Experimental results demonstrate CoA\\u2019s effectiveness in handling complex questions across QA benchmarks and in real-world applications, particularly in the Web3 domain.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This study introduces a framework embodying the divide-and-conquer approach, effectively breaking down complex tasks into manageable components that are tackled sequentially. This structure enhances the model's ability to handle multifaceted queries with improved precision.\\n\\n2. The empirical results demonstrate notable improvements in both performance and efficiency, as reflected in reduced API calls and token usage compared to prior methods. These gains underscore the framework\\u2019s effectiveness and potential for cost-saving in real-world applications.\\n\\n3. The introduction of the multi-reference faith score (MRFS) is a contribution, which effectively identifies and mitigates information conflicts, and improves answer reliability and trustworthiness in real-time applications.\", \"weaknesses\": \"1. The paper\\u2019s primary weakness lies in how it presents its key concepts and narrative. Many claims, such as \\\"multimodal,\\\" \\\"plug-and-play,\\\" and \\\"action-based\\\" elements, lack direct evidence or clear definitions, making it challenging to follow the core contributions. Though the pipeline is straightforward, understanding the study's actual workflow is hindered by (1) inaccurate terminology, (2) loosely connected methodology descriptions, and (3) a mix of abstract workflows and technical details.\\n\\n2. Certain terms are uncommon or seem misapplied, which leads to confusion. For example, terms like \\\"multimodal\\\" (when referring to text and tabular data), \\\"chain-of-action\\\" (more of a \\\"chain-of-data-collection-and-verification\\\"), \\\"actions design\\\" (data collection), \\\"actions workflow\\\" (data verification), \\\"node\\\" (sub-question), and \\\"knowledge boundary\\\" (what a model actually knows) lack clarity and could benefit from more precise definitions or alternatives.\\n\\n3. Question decomposition appears critical to this framework, yet there is limited discussion on decomposition strategies or comparisons with existing baselines. Further elaboration here would strengthen the paper's contributions.\\n\\n4. The \\\"plug-and-play\\\" feature is presented as a low-cost prompting strategy; however, integrating retrieval for each data type (e.g., web, internal knowledge) may not be straightforward. It may be worth reconsidering or refining this claim to better reflect its implementation complexity.\\n\\n5. The paper\\u2019s claim of multimodal data handling is unclear. 
If the input consists of real-time information, domain knowledge, and tabular data, it may be more accurately described as handling heterogeneous data rather than multimodal data. Additionally, if tabular data is linearized as text for LLM input, the fundamental multimodal claim weakens.\\n\\n6. The study does not include ablations to show the specific contribution of tabular data. Providing such analyses could clarify its impact on the framework's performance.\\n\\n7. Section 3.2 mentions expert evaluation on a 1 to 3 scale based on three criteria, but it lacks details on the expert recruitment process, qualifications, and any inter-rater reliability metrics. Adding these details would increase the transparency and credibility of the evaluation process.\", \"questions\": \"1. Could you clarify what \\u201cimputation\\u201d refers to in Table 2? Are there results available for CoA without MRFS, and what does \\u201cw/ ROUGE\\u201d mean? My understanding was that ROUGE is used only in ASQA.\\n\\n2. In Table 3, could you provide separate statistics for input and output tokens, as well as the average token usage per action? This would help readers better understand the specific cost details.\\n\\n3. Could you elaborate on what is meant by the term \\u201cknowledge boundary\\u201d?\\n\\n4. Are the results of the Chain-of-Action framework directly comparable to previous studies? I noticed that this study used GPT-4, while DSP and SearchChain relied on older-generation LLMs (text-davinci-002 and gpt-3.5-turbo, respectively).\\n\\n5. Would it be fair and perhaps clearer to rename Sections 2.2.1 and 2.2.2 as \\\"Data Collection\\\" and \\\"Data Verification,\\\" instead of \\u201cActions Design\\u201d and \\u201cActions Workflow\\u201d? These alternative terms seem easier to understand and align well with the content of the corresponding subsections.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
1AYrzmDK4V | Watermark Smoothing Attacks against Language Models | [
"Hongyan Chang",
"Hamed Hassani",
"Reza Shokri"
] | Statistical watermarking is a technique used to embed a hidden signal in the probability distribution of text generated by large language models (LLMs), enabling the attribution of the text to the originating model. We introduce the smoothing attack and show that existing statistical watermarking methods are not robust against minor modifications of text. In particular, with the help of a weaker language model, an adversary can smooth out the distribution perturbation caused by watermarks. The resulting generated text achieves comparable quality to the original (unwatermarked) model while bypassing the watermark detector. Our attack reveals a fundamental limitation of a wide range of watermarking techniques. | [
"LLM Watermark"
] | Reject | https://openreview.net/pdf?id=1AYrzmDK4V | https://openreview.net/forum?id=1AYrzmDK4V | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xeOLLzV6nD",
"wzjDTw8cbD",
"vVZ5LYb1d5",
"tAsxkAOkaF",
"tA0tzMv10G",
"sdpvNxcTEo",
"piuJ8l4iqn",
"nf18tmAmqC",
"nViDNQi9nB",
"kAoCY43Ln9",
"jjvEeH05kl",
"hqmATvXzyd",
"efpAVxCC5C",
"XmL9CBP91j",
"ULkpg2rv3U",
"TdIMt2V76X",
"P9ayoUxK43",
"NZircXOUJk",
"Mg11d3D1Pb",
"JBVwhSSTEe",
"DvspChM4Hu",
"1kupy2wqQM"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732252161960,
1732250334190,
1732252820481,
1730700877405,
1730166652029,
1732662716033,
1732693833648,
1732251671539,
1734820168843,
1737524246852,
1732252797029,
1732257127766,
1733204418072,
1732633652944,
1730487135041,
1732251715633,
1730252816514,
1733113980831,
1732741374344,
1733031940551,
1732249378615,
1732249768634
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_f25P"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_NfPb"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_f25P"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Area_Chair_Zijr"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_a4FL"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_a4FL"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_RzfD"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_RzfD"
],
[
"ICLR.cc/2025/Conference/Submission13239/Reviewer_f25P"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13239/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer RzfD\", \"comment\": \"**W1 (weaker correlation when k is small):**\\n> The significance level St is unobserved and was estimated using a surrogate quantity, ct. Though the authors showed that there is generally a negative correlation between cy and St, this is only a weak justification. It is possible that a small c would correspond to a large St in some situations, e.g., when K is small.\\n\\n**Response:**\\nWe appreciate the reviewer for raising this concern.\\nIndeed, the significance level was estimated based on a surrogate quantity and there could be errors in this estimation. In our design, we circumvent this issue by resorting to a *soft assignment of token distributions:* when the estimated significance level is high, it is *more likely (but not deterministic)* that we sample from the watermark-free reference model, and vice versa. Perhaps due to this design, our method does not suffer much from the aforementioned estimation error and performs the best among existing competitors (except for the unrealistic paraphrase attack using GPT4). We will leave further improvement as a future work direction.\\n\\n---\\n\\n**W2 (distortion-free watermarking schemes):** \\n> The method only applies to the \\u201cgreen-red list\\u201d watermarking scheme, which is known to be biased because it does not preserve the original text distribution. In contrast, there are unbiased watermarking methods (e.g., Kuditipudi et al., 2023; Aaronson, 2023). It is unclear if the proposed method applies to unbiased watermarking schemes. Perhaps the authors can provide more discussions about how their method might be adapted or extended to work with unbiased watermarking schemes.\\n\\n\\n**Response:**\\nKuditipudi et al., 2023; and Aaronson, 2023 do not alter the token distributions of the original model. Such watermark schemes are trivial to attack under our threat model, where the adversary can observe the top-K probabilities and the corresponding tokens (this information is in general accessible to users, e.g., via OpenAI\\u2019s API). As the adversary can observe the unaltered probabilities, effectively, he can directly sample tokens from the unwatermarked model, obtaining the unwatermarked output (this problem is trivial). Hence, we focus on watermark schemes that do not preserve the original probability distributions. We have added a clarification to this issue and adjusted our claims accordingly in the revised version (see Appendix A.5). \\n\\n---\\n\\n**W3 (theoretical analysis of text quality):** \\n> The paper lacks a rigorous theoretical analysis of the effect of the smooth attack on the text quality, e.g., bounds on how much the smoothing attack can affect certain text quality metrics.\\n\\n\\n**Response:**\\nWe acknowledge the reviewer\\u2019s concern, but this does not demerit the contribution of our work. \\n\\nOur work is not focused on theoretical analysis; rather, it provides a practical attack that is good enough to demonstrate the limitations of the current ``Green-red list'' statistical watermarking schemes, calling for new watermark techniques. \\n\\nBesides, sometimes our attack generates texts of higher quality than the target watermarked model, especially when the distortion caused by watermarks is significant (See Figure 5 in the paper).\\n\\n---\\n\\n**Q1 (About quality metric):** \\n> In Table 1, Watermark (smoothing) has a lower perplexity than Watermark (or even Unwatermark) in some cases (e.g., Llama2-7b). 
In other words, the attack can even improve the quality of the text, which seems counterintuitive as the reference model is weaker. This also raises a concern about whether perplexity is the right measure to look at the quality of a text here. The authors may want to include other text quality metrics in the numerical studies.\\n\\n**Response:**\\nWe have added the new results in response to this question. Please refer to our [general response](https://openreview.net/forum?id=1AYrzmDK4V&noteId=wzjDTw8cbD).\\n\\n---\\n\\n**Q2 (potential pitfalls):**\\n> I would like to know if the authors can discuss the potential pitfalls of their methods, e.g., provide concrete examples or scenarios where their smooth attack might fail, and discuss the implications of such failures\\n\\n**Response:**\\nOur attack may encounter challenges in scenarios where the adversary has access to only a limited number of token probabilities (i.e., reducing the value of K), causing large errors in the estimate of the significance level (as the reviewer pointed out in the first question). However, as we have mentioned, our attack circumvents this pitfall by using a soft assignment of token distributions, and performs considerably well under most experimental setups (we set K=20, which seems to be a valid assumption so far). We have included a discussion on this limitation in the revised version (see Appendix A.7).\\n\\nPlease also refer to our response for the [potential defense](https://openreview.net/forum?id=1AYrzmDK4V&noteId=TdIMt2V76X)\"}",
"{\"title\": \"On the metric of text quality\", \"comment\": \"Several reviewers (Reviewer NfPb, Reviewer a4FL, and RzfD) have questioned the use of perplexity as a quality metric.\\n\\nTo explain, perplexity (PPL) is widely used in the text watermarking literature (e.g., [1,2,3,4,5,6]) and serves as a standard metric for evaluating text quality. We followed the literature and included perplexity as a metric for quality evaluation. We are aware of alternative metrics for quality measurement, but finding the universally best quality metric is still an open problem in the NLP literature [7]. In what follows, we discuss several alternative metrics suggested by the reviewers and their inapplicability to our work.\\n\\n- **P-SP** (suggested by Reviewer a4FL): P-SP metric measures cosine similarity between the embeddings of **two texts** (one of them is regarded as the reference text). It was originally proposed under a different context than watermark detection, e.g., evaluating the quality of a translated/paraphrased sentence. If the given sentence looks/means more similar to the ground truth, then P-SP gives a higher score. In our context, there is no reference text to compare with. Instead, an LLM could be used for answering a question to which there is no single deterministic answer, e.g., explaining why a certain company (e.g., Intel) is good. In such scenarios, answers that look very different may be of high quality at the same time, e.g., both \\\"Intel pays well\\\" and \\\"Intel's CPU plays an important role in commercial computers\\\" are valid answers. \\n \\n- **BLEU** (suggested by Reviewer NfPb): Similar to P-SP, this metric assumes the existence of a reference text and evaluates its word-level similarity (counting overlaps over n-grams) with a given text. Hence, this metric does not seem to be a proper metric in our setting.\\n\\nWe also run additional experiments using **GPT-4 as the quality metric** to evaluate the generated texts on a scale from 1 to 10 (higher means better). The prompt template is similar to [8] (see Appendix A.4 of the revised version). We present the new results below (also included in Table 1 of the revised version).\\n\\n| Setting | OPT-1.3b (KGW) | OPT-1.3b (Unigram) | Llama2-7b (KGW) | Llama2-7b (Unigram) |\\n|--------------------------|--------------|------------------|---------------|-------------------|\\n| Human-written | 8.83 | 8.83 | 8.83 | 8.83 |\\n| Reference | 7.16 | 7.16 | 3.3 | 3.33 |\\n| Unwatermarked | 8.66 | 8.66 | 8.66 | 8.66 |\\n| Watermarked | 8.33 | 7.66 | 8.28 | 8.33 |\\n| Watermarked (P-GPT3.5) | 8.66 | 8.33 | 8.83 | 9.0 |\\n| Watermarked (Word-D) | 2.0 | 2.0 | 2.48 | 2.17 |\\n| Watermarked (Word-S) | 2.66 | 2.66 | 3.81 | 4.0 |\\n| Watermarked (Smoothing) | 7.33 | 7.33 | 5.25 | 4.5 |\\n\\nOur attack achieves comparably good text quality, which is consistently better than the reference model and the Word-D and Word-S attacks that do not use reference models. Notably, on Llama2-7b, the text quality of our attack is much better than the above baselines, and is *only slightly worse* than the original watermarked/unwatermarked models and the unrealistic paraphrasing attack P-GPT-3.5. We note that the evaluation by GPT-4 also has limitations [9, 10, 11,12], e.g., it may be biased toward its own responses [8]. Addressing these limitations is an avenue for future work. \\n\\nWe would like to thank the reviewer for the instructive comments.\"}",
"{\"title\": \"Response to Reviewer NfPb -- Continued\", \"comment\": \"---\\n\\n**W5 (z-score threshold and TPR)**: \\n> The choice of z-score threshold used in the experiments is unclear. It would be more straightforward to present the true positive rates at specific theoretical false positive rates, providing a clearer understanding of the method\\u2019s performance.\\n\\n**Response:** Sorry for the confusion. The z-score thresholds used in our experiments are presented in Section 4.1 and detailed in Appendix A2 (see page 14). \\n\\nFor clearer presentations, we have also reported the Positive Prediction Rate (PPR), which measures the fraction of test inputs that are predicted as watermarked by the watermark detector. From Tables 1 (page 8) and 2 (page 17), it is clear that our attack consistently outperforms almost all other attacks by achieving lower PPRs (except sometimes outperformed by the impractical paraphrasing attack with GPT4). \\n\\nIn what follows, we provide a detailed explanation of why lower PPRs are better.\\n\\nThere are two types of texts, positive samples (those generated from the watermarked model with/without attacks) and negative samples (those written by humans). Considering positive samples (i.e., texts generated from the watermarked model with/without attacks), PPR is computed as the fraction of these positive samples correctly identified as positive/watermarked by the detector, which reflects the *True Positive Rate*. For negative samples (text generated from unwatermarked models or human written text), the PPR is computed as the fraction of these negative samples misidentified as positive/watermarked by the detector, which reflects the *False Positive Rate*. Hence, a lower PPR indicates a stronger attack against watermark models. \\n\\nWe have added the above clarification in the revised version (Section 4).\\n\\n---\\n\\n**W6 (setups for XSIR and SIR):** \\n> The experimental settings for certain tests are suboptimal. For instance, in Table 2, the z-score for XSIR and SIR is too low, indicating that the watermark strength in the original watermarked model is insufficient.\\n\\n**Response:** We believe this is a misunderstanding. Indeed, it is not sufficient to focus on the z-score metric alone (also see the above response). That is exactly why we have also included the PPR as a metric. \\n\\nFor the XSIR and SIR methods, their PPRs are both 0.87, meaning that the watermarks are detectable. Hence, this setup is not suboptimal. Using our attack, we managed to reduce the PPR to 0.04 (see Table 2 on page 17). Hence, our attack is effective.\\n\\n---\\n\\n**Q1 (total variation distance):** \\n> Why in Figure 1, top-p sampling (right figure) has some points with the total variation distance being 0 or 1, but top-k sampling (middle figure) does not?\\n\\n**Response:** This is a good point. This difference is attributed to the different natures of top-p and top-k sampling. \\n\\n- In top-p sampling, we sample from the smallest set of tokens whose sum of probabilities is at least p (we set p to 0.8). Hence, it is likely that there are only a few candidates to sample from the watermarked model and the unwatermarked model, making the sampling process almost *deterministic*. 
As a result, the total variation distance (TVD) between the empirical distributions of the watermarked and unwatermarked models is **i**): either very large or becomes 1 (when the two candidate sets have no overlap at all); or **ii**): very small or becomes 0 (when the two candidate sets are almost identical, or both containing the same one candidate).\\n- In top-k sampling, we sample from the tokens of the top-K probabilities. When we set k to 20, there are more candidates to sample from, making the sampling process more stochastic/less deterministic. As a result, the above scenarios **i**) and **ii**) are not likely to happen.\\n\\nDespite the above difference, both sampling methods reflect the positive correlation between the significance level and the TVD between the token distributions of the watermarked and unwatermarked models. We can then exploit this correlation to design the attack.\\n\\n---\\n\\n**Q2 (number of prefixes):** \\n> How many queries (prefixes) do you use for computing the bin index as described in Lines[261-266]?\\n\\n\\n**Response:** Thanks for asking. 200 prefixes are used to compute the upper and lower bounds for constructing the bins. Note that all 200 prefixes are obtained as **one response** output by the target watermarked model, and the upper and lower bounds can be reused by our attack on different texts. Hence, **this cost is negligible**.\"}",
"{\"summary\": \"The paper proposes an automatic method for editing watermarked text from a language model to evade watermark detection using another (weaker) language model. The paper mainly considers the \\\"red-green list\\\" watermark of Kirchenbauer et al. and variants thereof, though the techniques should presumably generalize to other watermarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper proposes a heuristic to estimate which tokens contribute the most to the overall watermark signal and removes the watermark by editing these tokens using another language model. The idea is interesting, and the paper empirically validates the effectiveness of their attack across different watermarks, language models, and datasets. These results clearly establish the effectiveness of the attack in practice.\", \"weaknesses\": \"The paper distinguishes its main contributions from prior work by arguing that prior work on automatically removing watermarks involved using language models that were at least as strong as the original watermarked language model. However, one notable exception is the work of Zhang et al. [1], who seem to also focus on removing watermarks using weaker language models. This work is cited in the present paper but not discussed in any detail. It would be great if the authors can update their paper with a discussion of how their work differs from [1]. Otherwise, the novelty/significance of the main contributions over prior work is not clear.\\n\\n\\n[1] Zhang et al. (2023) Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models. https://arxiv.org/abs/2311.04378\", \"questions\": \"What are the main differences between this work and that of Zhang et al. (2023)? (see Weaknesses section)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces a \\\"smoothing attack\\\" that bypasses statistical watermarking in large language models (LLMs). By blending outputs from the watermarked model with a weaker reference model, it removes watermarks without impacting text quality on PPL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The writing is easy to follow.\\n\\n2. Propose a smoothing attack scheme against statistical watermarking, and show that the significance level $S_t$ is highly correlated with the total variation distance.\", \"weaknesses\": \"1. The applicability of this method is limited, as obtaining a high-quality reference model is often not possible (e.g. for GPT-4). Additionally, it requires access to token logits, meaning it is not a purely black-box approach as claimed.\\n\\n2. In line 146. The authors overclaim that their attack is universally applicable to all statistical watermarking schemes. However, many watermarking schemes [1,2,3] do not use a green list, and their proposed method cannot be applied.\\n\\n3. Additional metrics are needed to better reflect the quality of the generated text. PPL tends to favor a distribution similar to that of the oracle model, which can introduce bias. It would be more informative to include straightforward metrics, such as BLEU in machine translation, to provide a clearer evaluation.\\n\\n4. The paper lacks key baseline results needed to demonstrate the effectiveness of the proposed method. Naive smoothing using $\\\\lambda \\\\tilde{P}(x)+(1-\\\\lambda) P^{ref}(x)$ can also remove the watermark while preserving part of the text quality.\\n\\n5. The choice of z-score threshold used in the experiments is unclear. It would be more straightforward to present the true positive rates at specific theoretical false positive rates, providing a clearer understanding of the method\\u2019s performance.\\n\\n6. The experimental settings for certain tests are suboptimal. For instance, in Table 2, the z-score for XSIR and SIR is too low, indicating that the watermark strength in the original watermarked model is insufficient.\\n\\n[1] Kuditipudi, R., Thickstun, J., Hashimoto, T. and Liang, P., 2023. Robust distortion-free watermarks for language models. arXiv preprint arXiv:2307.15593.\\n\\n[2] Hu, Z., Chen, L., Wu, X., Wu, Y., Zhang, H. and Huang, H., 2023. Unbiased watermark for large language models. arXiv preprint arXiv:2310.10669.\\n\\n[3] Dathathri, S., See, A., Ghaisas, S., Huang, P.S., McAdam, R., Welbl, J., Bachani, V., Kaskasoli, A., Stanforth, R., Matejovicova, T. and Hayes, J., 2024. Scalable watermarking for identifying large language model outputs. Nature, 634(8035), pp.818-823.\", \"questions\": \"1. Why in Figure 1, top-p sampling (right figure) has some points with the total variation distance being 0 or 1, but top-k sampling (middle figure) does not?\\n2. How many queries (prefixes) do you use for computing the bin index as described in Lines[261-266]?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the clarification. It would be very valuable to include some version of this response in the main text of the paper (e.g., Related Work section) versus Appendix A.5 since Zhang et al.'s work is closely related. This addresses my questions and I will raise my score accordingly upon seeing the revised version of the paper (in particular, the revised discussion of related work).\"}",
"{\"title\": \"Revised Related Work\", \"comment\": \"Thank you for the suggestion. We have revised the related work section to clarify the key differences between our work and that of Zhang et al (See Section 5 of the revised paper). We would also like to further address any additional questions you may have.\"}",
"{\"title\": \"Response to Reviewer a4FL\", \"comment\": \"**W1 (threat model):**\\n>The proposed method relies on using the logits/output probabilities of the watermarked model. This might limit the attack to some API models that may not return the logits/probabilities or only return top-k probabilities or even calibrated probabilities.\\n\\n**Response:** \\n\\nWe believe this is a misunderstanding. Our attack relies on the *top-K probabilities* from the watermarked model instead of logits or full output probabilities for all tokens. As we have mentioned above, this information is generally accessible to the attacker/user of the model. Our attack is also not affected under standard calibration techniques, as our attack uses the *relative difference* between the probabilities of the top-1 and top-K tokens for estimating the significance level.\\n\\n---\\n\\n**W2 (quality metric):** \\n> The paper uses perplexity or loss to measure the text quality, but I think it's not enough to show the quality of the text. For example, the model can generate an answer for a math question with a very low perplexity, but the answer is completely wrong. So, I think it will be more helpful if the authors can include more text quality metrics like P-SP used in [1] or even a model-based evaluation like asking a large oracle model which generation is preferable.\\n\\n**Response:** \\n\\nWe have added the new results in response to this question. Please refer to our [general response](https://openreview.net/forum?id=1AYrzmDK4V¬eId=wzjDTw8cbD).\\n\\n---\\n\\n\\n**W3 (data):** \\n> I think it's also helpful to the paper if the answers can show the results under different data distributions instead of overall c4.\\n\\n**Response:**\\n\\nThank you for your insights. We agree that it is an interesting research direction to explore the effect of watermark defense/attack schemes under different data distributions. However, given the comprehensive nature of the watermark algorithms we have evaluated, we primarily focused on the commonly used benchmark datasets [1, 2, 3, 4, 5]. Further exploration in individual data domains is beyond the scope of this work.\\n\\n---\\n\\n**Q1 (a new baseline and more experiment results):** \\n> Can the authors provide a baseline that uses the local reference model to do the paraphrase attack?\\n\\n\\n**Response:**\\nSure. The suggested baseline, according to our newly added experiments (see below), is less effective than the baseline which queries the reference model directly. \\n\\nIn particular, we use the reference model to phrase the text generated from the watermarked model, using the prompt \\u201cRewrite the text in the parenthesis (<WATERMARKED TEXT>):\\u201d. In the table below, we compare the results between using the reference model directly and using the reference to paraphrase the text from the watermarked OPT-1.3b.\\n\\n| | | Z-score | PPR | PPL |\\n| --- | --- | --- | --- | --- |\\n| KGW | Reference model | 0.21 | 0.0 | 19.75 |\\n| | Paraphrasing using reference model | 1.671 | 0.15 | 42.109 |\\n| Unigram | Reference model | -0.07 | 0.00 | 19.51 |\\n| | Paraphrasing using reference model | 0.73 | 0.05 | 37.64 |\\n\\nAs we can see, this baseline achieves lower quality (PPL) and higher detection rate (PPR) and z-score (easier to detect). Overall, using the reference model to paraphrase watermarked text is worse than querying the reference model directly.\\n\\nWe would like to thank the reviewer for the instructive comments.\"}",
"{\"metareview\": \"**Paper Summary:**\\n\\nThe paper proposes an attack on distortionary text watermarks, using text from an unwatermarked weak LM to rewrite text from a strong watermarked LM in a way that erases the watermark.\\n\\n**Strengths:**\", \"the_attack_is_interesting_and_it_is_simple\": \"a token-level intervention, as opposed to more complicated paraphrasing attacks studied in prior work, e.g., Zhang et al. (ICML, 2024).\\n\\n**Weaknesses:**\\n\\nReviewers a4FL and NfPb note that the attack relies on access to top-k probabilities of the watermarked language model. These are often, but not always available.\\n\\nReviewers f25P, NfPb, and RzfD observe that this attacked is not applicable to distortion-free watermarks.\", \"additional_comments_on_reviewer_discussion\": \"Concerns were raised in discussion about the novelty of the proposed method in comparison to previously-proposed paraphrasing attack, e.g., Zhang et al. (ICML, 2024). In each case, a smaller model is used to remove a watermark from a more powerful model. This was largely addressed in discussion.\\n\\nI have lingering concerns about how Zhang et al. is addressed in the main text. E.g., around line 114:\\n\\n\\\"rules out the paraphrasing attacks that leverage a strong LM (e.g., ChatGPT) to paraphrase the watermarked text (Zhang et al., 2024a)\\\"\\n\\nI do not think this is fair to Zhang's work, which really is about by using weaker LMs to attack stronger watermarked LMs. While the revised discussion of Zhang et al. in A.5 is much appreciated and I think the main text could still be improved.\", \"concerns_about_applicability_to_distortion_free_watermarks_were_addressed_by_the_authors_definitionally\": \"> Distortion-free watermark schemes, under this setting, are vulnerable to the adversary, who can sample directly from the watermark-free token distribution. Therefore, attacking distortion-free watermarks under our setting is not an interesting/challenging task.\\n\\nI am not entirely convinced by this argument. First, to bypass the watermark in this way would require all the model's logits, not just top-k. Second, it doesn't address the point that methods like Zhang et al. really do apply to a broader context.\\n\\nI do not think that any of these concerns are fatal flaws of this paper. But I do wish these presentational issues had been more satisfactorily resolved during the reviewing period.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response to Reviewer NfPb\", \"comment\": \"**W1 (threat model):**\\n> The applicability of this method is limited, as obtaining a high-quality reference model is often not possible (e.g. for GPT-4). Additionally, it requires access to token logits, meaning it is not a purely black-box approach as claimed.\\n\\n**Response:** We believe there is a misunderstanding.\\n\\nFirst, our attack relies on reference models that are much *weaker* than the target watermarked model. For example, when attacking Llama2-7b, we use the TinyLlama-1.3b as the reference model. The main message of our paper is that we do not need a strong adversary to break the existing statistical watermarks, which are validated by the experiment results.\\n\\nSecond, our attack relies on *probabilities from the top-K tokens* of the watermarked model rather than direct access to token logits. This information is typically accessible in black-box scenarios, such as through the OpenAI\\u2019s API (with k=20). \\n\\nTherefore, our proposed method is in general applicable, for attackers with limited capabilities aiming at black-box models.\\n\\n---\\n\\n**W2 (distortion-free watermarking schemes):** \\n> In line 146. The authors overclaim that their attack is universally applicable to all statistical watermarking schemes. However, many watermarking schemes [1,2,3] do not use a green list, and their proposed method cannot be applied.\\n\\n**Response:** Thanks for pointing this out. We are aware of distortion-free watermarking schemes, which *do not alter* the token distributions of the original model. Those algorithms introduce the watermarks during the sampling process without affecting the probability distribution, the mapping from the probability distribution to the token, without changing the probability distribution. In our setting, as the adversary can observe the probability for the top-K tokens (which is a valid assumption; see the above example of OpenAI), he can directly sample the next token based on the obtained probabilities, successfully removing the watermark. \\n\\nIn other words, attacking distortion-free watermarks is a trivial task under our threat model (hence, we attack non-distortion-free schemes). We have added a clarification to this issue and adjusted our claims accordingly in the revised version (see Appendix A.5). \\n\\n**W3 (quality metric):** \\n> Additional metrics are needed to better reflect the quality of the generated text. PPL tends to favor a distribution similar to that of the oracle model, which can introduce bias. It would be more informative to include straightforward metrics, such as BLEU in machine translation, to provide a clearer evaluation. \\n\\n**Response:** We have added the new results in response to this question. Please refer to our [general response](https://openreview.net/forum?id=1AYrzmDK4V¬eId=wzjDTw8cbD).\\n\\n---\\n\\n**W4 (naive smoothing baseline):** \\n> \\u2022 The paper lacks key baseline results needed to demonstrate the effectiveness of the proposed method. Naive smoothing using $\\\\lambda \\\\tilde{P}(x) + (1- \\\\lambda)P^{ref}(x)$ can also remove the watermark while\\npreserving part of the text quality.\\n\\n**Response:** We appreciate the reviewer\\u2019s intuition. The suggested attack indeed was considered once as a candidate, but there are several caveats to running this attack in practice. \\n\\n1. It is difficult to obtain $P(x)$ of the target model. 
In our setting, the attacker only has access to the top-K probabilities and the corresponding tokens, but not the whole probability distribution.\\n2. Even if 1. is possible, the combination of the probability distributions of the target model and the reference model together may not be a valid probability distribution in the first place, since they could be defined on different spaces (i.e., the tokenizers may be different). Our solution avoids this issue by either sampling from the target model or the reference model.\"}",
"{\"title\": \"On the metric of text quality -- Continued\", \"comment\": \"[1] Kirchenbauer, John, et al. \\\"A watermark for large language models.\\\"\\u00a0ICML, 2023.\\n\\n[2] Zhao, Xuandong, et al. \\\"Provable Robust Watermarking for AI-Generated Text.\\\"\\u00a0ICLR, 2024.\\n\\n[3] Liu, Aiwei, et al. \\\"An unforgeable publicly verifiable watermark for large language models.\\\"\\u00a0*ICLR,* 2023.\\n\\n[4] Lu, Yijian, et al. \\\"An Entropy-based Text Watermarking Detection Method.\\\"\\u00a0ACL, 2024.\\n\\n[5] Liu, Yepeng, and Yuheng Bu. \\\"Adaptive Text Watermark for Large Language Models.\\\" ICML, 2024.\\n\\n[6] Wu, Yihan, et al. \\\"A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models.\\\"\\u00a0ICML, 2024.\\n\\n[7] Chang, Yupeng, et al. \\\"A survey on evaluation of large language models.\\\" TIST, 2024.\\n\\n[8] Jovanovi\\u0107, Nikola, Robin Staab, and Martin Vechev. \\\"Watermark Stealing in Large Language Models.\\\"\\u00a0ICML, 2024.\\n\\n[9] Panickssery, Arjun, Samuel R. Bowman, and Shi Feng. \\\"Llm evaluators recognize and favor their own generations.\\\"\\u00a02024.\\n\\n[10] Gao, Mingqi, et al. \\\"Llm-based nlg evaluation: Current status and challenges.\\\" 2024.\\n\\n[11] Chu, KuanChao, Yi-Pei Chen, and Hideki Nakayama. \\\"A Better LLM Evaluator for Text Generation: The Impact of Prompt Output Sequencing and Optimization.\\\" 2024.\\n\\n[12] Li, Zhen, et al. \\\"Leveraging large language models for nlg evaluation: Advances and challenges.\\\"EMNLP, 2024.\"}",
"{\"title\": \"Response to Reviewer RzfD\", \"comment\": [\"We appreciate the reviewer's feedback. Below, we address each concern in detail:\", \"**Theoretical Justification (W3):** We acknowledge the reviewer's request for additional theoretical justification. However, given the complexity and black-box nature of large language models (LLMs), we believe that the most convincing evidence for the efficacy of our attack is its empirical success.\", \"**Fair Comparison of Text Quality:** We recognize the broader challenges in defining and comparing text quality across different methods, a fundamental issue within the field of LLMs that extends beyond watermarking. In our revised version, we have incorporated two reasonable and widely used metrics, perplexity and LLM-as-a-Judge, to evaluate text quality.\", \"**Applicability Beyond the \\\"Green-Red List\\\" Watermarking Scheme:** Under our threat model, other watermarking approaches can be bypassed by sampling from the top-k probabilities, which not only avoids watermarked outputs but also preserves text quality. This is precisely why our evaluation focuses on the green-red list method.\"]}",
"{\"title\": \"Thank you for your response\", \"comment\": \"I really appreciate the authors' detailed response. Therefore, I keep my score positive.\"}",
"{\"summary\": \"In this paper, the authors introduce a novel watermark-removal attack that requires only a small watermark-free reference model. The attacker first estimates the probability of the generated token at position i being in the watermark's green list, which correlates with the relative confidence of the most likely token among the top k tokens. According to the confidence score, the attacker then combines the probability distributions at position i from both the watermarked model and the reference model to sample the token. This approach effectively evades watermark detection while maintaining high text quality.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I find the proposed method very interesting and quite different from the previous work. Meanwhile, the method doesn't require a strong oracle model like a paraphrasing attack, which makes the threat model more realistic.\", \"I really enjoy reading this paper, especially section 3.1, which gives readers a lot of insights.\", \"The results look positive and a lot of different watermarking schemes are covered (most results are presented in the appendix).\"], \"weaknesses\": [\"The proposed method relies on using the logits/output probabilities of the watermarked model. This might limit the attack to some API models that may not return the logits/probabilities or only return top-k probabilities or even calibrated probabilities.\", \"The paper uses perplexity or loss to measure the text quality, but I think it's not enough to show the quality of the text. For example, the model can generate an answer for a math question with a very low perplexity, but the answer is completely wrong. So, I think it will be more helpful if the authors can include more text quality metrics like P-SP used in [1] or even a model-based evaluation like asking a large oracle model which generation is preferable.\", \"I think it's also helpful to the paper if the answers can show the results under different data distributions instead of overall c4.\", \"[1] Kirchenbauer, J., Geiping, J., Wen, Y., Shu, M., Saifullah, K., Kong, K., Fernando, K., Saha, A., Goldblum, M., & Goldstein, T. (2023). On the Reliability of Watermarks for Large Language Models. ArXiv, abs/2306.04634.\"], \"questions\": [\"Can the authors provide a baseline that uses the local reference model to do the paraphrase attack?\", \"What could be potential adaptive defenses for this attack?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer a4FL -- Continued\", \"comment\": \"**Q2 (potential defenses)**:\\n> What could be potential adaptive defenses for this attack?\\n\\n\\n**Response:** We first recall the idea of our attack, and then present the idea of a potential defense.\", \"idea_of_our_attack\": \"Our attack leverages the correlation between the significance level of the watermark and the uncertainty of predicting the next token, based on which we query either the watermarked model or the reference model for generating the text token.\", \"idea_of_defense\": \"Hence, a natural idea to defend our attack is to *limit the attacker\\u2019s access* to the information on the significance level of the watermark. The more straightforward way to do that is to make the watermarked model only return the most likely token without other information or alternatively, return only a few of the most likely tokens and the associated token probabilities. This would cause the estimation of the *significance level of the watermark* to be less accurate, affecting the attacker\\u2019s decision on when to use the reference model or the watermarked model, ultimately affecting the quality of the generated text and/or leaving some watermark traces in it.\\n\\nHowever, this defense may ***not be practical to deploy***. As an example, some existing LLM services return the probabilities of the most likely tokens (e.g., Open AI\\u2019s API returns the top-20 probabilities and the associated tokens), which already provide enough information to run our attack. Besides, limiting a user\\u2019s access to such information may do more harm than good, as this information may be crucial for good user experience (e.g., output customization, explainability, debugging, interpretability, evaluation, and monitoring). Therefore, such information is often accessible to a user and; hence, also to our attack.\\n\\nThus, the effectiveness of our attack (there does not yet exist a practical defense) exemplifies the need for new watermark techniques. We have included a discussion on this limitation in the revised version (see Appendix A.7).\\n\\n\\n----\\n[1] Kirchenbauer, John, et al. \\\"A watermark for large language models.\\\" ICML, 2023.\\n\\n[2] Zhao, Xuandong, et al. \\\"Provable Robust Watermarking for AI-Generated Text.\\\" ICLR, 2024.\\n\\n[3] Liu, Aiwei, et al. \\\"An unforgeable publicly verifiable watermark for large language models.\\\" ICLR, 2023.\\n\\n[4] Lu, Yijian, et al. \\\"An Entropy-based Text Watermarking Detection Method.\\\" ACL, 2024.\\n\\n[5] Liu, Yepeng, and Yuheng Bu. \\\"Adaptive Text Watermark for Large Language Models.\\\" ICML, 2024.\"}",
"{\"summary\": \"This work develops a smooth attack in the \\u201cgreen-red list\\u201d watermarking framework. The paper shows that a smooth attack makes it easier to bypass the detector while still preserving the quality of the text.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Many existing methods for statistical watermarking have primarily concentrated on the generation and detection of watermarks. This paper takes a different approach by examining statistical watermarking from a new perspective. This perspective is interesting and may also aid in the development of improved watermark generation and detection techniques.\", \"weaknesses\": \"1. The significance level $S_t$ is unobserved and was estimated using a surrogate quantity, $c_t$. Though the authors showed that there is generally a negative correlation between $c_t$ and $S_t$, this is only a weak justification. It is possible that a small $c_t$ would correspond to a large $S_t$ in some situations, e.g., when $K$ is small.\\n2. The method only applies to the \\u201cgreen-red list\\u201d watermarking scheme, which is known to be biased because it does not preserve the original text distribution. In contrast, there are unbiased watermarking methods (e.g., Kuditipudi et al., 2023; Aaronson, 2023). It is unclear if the proposed method applies to unbiased watermarking schemes. Perhaps the authors can provide more discussions about how their method might be adapted or extended to work with unbiased watermarking schemes.\\n3. The paper lacks a rigorous theoretical analysis of the effect of the smooth attack on the text quality, e.g., bounds on how much the smoothing attack can affect certain text quality metrics.\", \"questions\": \"1. In Table 1, Watermark (smoothing) has a lower perplexity than Watermark (or even Unwatermark) in some cases (e.g., Llama2-7b). In other words, the attack can even improve the quality of the text, which seems counterintuitive as the reference model is weaker. This also raises a concern about whether perplexity is the right measure to look at the quality of a text here. The authors may want to include other text quality metrics in the numerical studies.\\n2. I would like to know if the authors can discuss the potential pitfalls of their methods, e.g., provide concrete examples or scenarios where their smooth attack might fail, and discuss the implications of such failures\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for responding to my questions; however, some of my concerns remain unresolved. Firstly, the authors have not provided a convincing conceptual (W1) and theoretical (W3) justification for their scheme. Second, it is difficult to make a fair comparison of the quality of the texts produced by different methods. The newly added results based on GPT-4 would clearly favor texts similar to those produced by GPT-4 (but does it really mean the text has a higher quality?) Lastly, the fact that the method is only applicable to the \\\"green-red list\\\" watermarking scheme limits its overall scope.\"}",
"{\"comment\": \"I have read the revised paper; the authors have addressed my main concerns re: related work, and I have updated my score accordingly. One lingering point I do not find convincing is the authors' justification for why they focus only on attacking the red-green list watermark of Kirchenbauer et al. versus other distortion-free watermarks that do not change the token distribution. In practice the top-k probabilities may not be available, in which case it is not clear whether the methods generalize to distortion-free watermarks (in particular, are they still more effective than Zhang et al., who do attack these watermarks?).\"}",
"{\"title\": \"Justification on our focus\", \"comment\": \"Thank you for your response.\\n\\nWe believe there is a misunderstanding.\\n\\nFirst, our assumption easily holds in practical settings--top-k probabilities and the corresponding tokens are available on OpenAI API. Distortion-free watermark schemes, under this setting, are vulnerable to the adversary, who can sample directly from the watermark-free token distribution. Therefore, attacking distortion-free watermarks under our setting is not an interesting/challenging task.\\n\\nSecond, although Zhang et al. 's attack does not rely on access to the top-k probabilities, it relies on something that is much more powerful than this information--the attack uses a strong perturbation model that is sometimes larger than the target model (to paraphrase the target text) and an evaluation model to score the paraphrased candidates. Our attack, on the other hand, works well even when it uses a much smaller reference model (e.g., we use TinyLlama-1.3B to attack Llama-7B).\\n\\nWe do not claim that our attack will always be better than that of Zhang et al. in all settings. Instead, we claim that, in our threat models, where the adversary has access to the top-k probabilities and the corresponding tokens, we can do better than theirs (although they rely on even stronger models to run their attack). Therefore, the success of our attack conveys a stronger message than theirs. That is, existing statistical watermark schemes are vulnerable to adversaries in practice.\"}",
"{\"title\": \"Clarification of threat model\", \"comment\": \"Reviewers (Reviewer NfPb, Reviewer a4FL) raised concerns about the access of the adversary to the target model.\\nIn our attack, the adversary only has access to the top-K probabilities and the corresponding tokens (with K=20) from the target model at each token position instead of logits/probabilities over all the tokens.\\n\\nWe would like to note that this is a reasonable assumption for the adversary, who could be a user of the target LLM, as common LLM service providers often grant their users access to such information through APIs, e.g., OpenAI\\u2019s API, for better customization, interpretability, traceability, and ultimately, better user experience.\"}",
"{\"title\": \"Comparision with Zhang et al. (2023)\", \"comment\": \"> What are the main differences between this work and that of Zhang et al. (2023)? (see Weaknesses section)\\n\\nThe main differences lie in the adversary\\u2019s capabilities and the cost of executing the attacks. Our approach imposes *fewer constraints* on the adversary\\u2019s capabilities and is *less expensive* to run. Moreover, our attack is *more effective* at removing watermarks.\\n\\n1. Adversary's capability:\\n - In our attack, we use a weaker model of the same type as the target watermarked model, which captures the capability of an adversary in practice. For example, we use TinyLlama-1.3b to attack Llama2-7b.\\n - The attack by Zhang et al. (2023) demands a more capable adversary, who has access to a **perturbation oracle model** (which generates a candidate for the given watermarked text) and a **quality oracle model** (which assigns a score for the candidate output by the perturbation model). When attacking Llama2-7b, Zhang et al. (2023) use T5-XL v1.1 of 2.8b as the perturbation model and RoBERTa-v3 large of 335M as the oracle model. Overall, their adversary is stronger than ours.\\n2. Cost:\\n 1. Our attack also requires fewer computation resources, as it makes fewer queries to the reference model. In particular, we query the reference model (e.g., TinyLlama-1.3b) *only when* the entropy for predicting the current token is high, and stops querying after producing the final token. The outcome is a *single candidate* for the prompt. \\n 2. As a comparison, Zhang et al. \\u2018s perturbation oracle model (e.g., T5-XL v1.1 of 2.8b) generates *multiple candidates* (e.g., 200 candidates) for one prompt, leading to much larger computational costs. \\n 3. We have benchmarked the attack costs on the KGW watermark using two NVIDIA TITAN RTX GPUs (24GB)\\u2014our attack takes around 30 seconds whereas Zhang et al.\\u2019s takes around 800 seconds (under the default setting in their paper).\\n3. Effectiveness:\\n 1. We have run some preliminary experiments to confirm this\\u2014our attack achieves a z-score of -0.0731, while Zhang et al.\\u2019s only achieves a z-score of 0.3747 (a smaller z-score indicates better performance in watermark removal). Hence, our attack is more effective under this metric. This comparison is based on the KGW watermarks on the OPT-2.7b model. \\n 2. In our original draft, we have already included a competitor that seems stronger than Zhang et al.\\u2019s attack\\u2014the paraphrase attack using GPT-3.5 (175B parameters). Notably, our attack achieves lower z-scores than this paraphrasing attack (see page 8 table 1), which, in turn, implies that our attack is better than Zhang et al.\\u2019s in removing the watermarks.\\n\\nOverall, our attack is more practical, efficient, and effective. In other words, our attack reveals more vulnerability to the existing statistical watermarks (we do not even need a strong adversary to break the watermarks). We have also included the discussion in the revised paper (See Appendix A.5).\"}"
]
} |
1ABhAZCoGr | DYSTIL: Dynamic Strategy Induction with Large Language Models for Reinforcement Learning | [
"Borui Wang",
"Kathleen McKeown",
"Rex Ying"
] | Reinforcement learning from expert demonstrations has long remained a challenging research problem, and existing methods resorting to behavioral cloning plus further RL training often suffer from poor generalization, low sample efficiency, and poor model interpretability. Inspired by the strong reasoning abilities of large language models (LLMs), we propose a novel strategy-based neuro-symbolic reinforcement learning framework integrated with LLMs called DYnamic STrategy Induction with Llms for reinforcement learning (DYSTIL) to overcome these limitations. DYSTIL dynamically queries a strategy-generating LLM to induce textual strategies based on advantage estimations and expert demonstrations, and gradually internalizes induced strategies into the RL agent through policy optimization to improve its performance through boosting policy generalization and enhancing sample efficiency. It also provides a direct textual channel to observe and interpret the evolution of the policy's underlying strategies during training. We test DYSTIL over challenging RL environments from Minigrid and BabyAI, and empirically demonstrate that DYSTIL significantly outperforms state-of-the-art baseline methods by 17.75% success rate on average while also enjoying higher sample efficiency during the learning process. | [
"Neurosymbolic Systems",
"Reinforcement Learning",
"Large Language Models",
"Strategy"
] | Reject | https://openreview.net/pdf?id=1ABhAZCoGr | https://openreview.net/forum?id=1ABhAZCoGr | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"mKvVkyxQHv",
"kaTPloQW6j",
"cVWYZGDDGk",
"YjDwfTf6FL",
"YSg4s7dHBU",
"Rd2PVmDg0a",
"NygA80NDHk",
"JsB78Py73N",
"3GZLcXFkWb"
],
"note_type": [
"official_comment",
"decision",
"official_review",
"official_comment",
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732859674035,
1737523934307,
1730704321756,
1732858429076,
1730505427258,
1734759238749,
1730750204848,
1732858275031,
1732851053995
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8819/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8819/Reviewer_i2kJ"
],
[
"ICLR.cc/2025/Conference/Submission8819/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8819/Reviewer_QBcw"
],
[
"ICLR.cc/2025/Conference/Submission8819/Area_Chair_uQ8x"
],
[
"ICLR.cc/2025/Conference/Submission8819/Reviewer_eNLJ"
],
[
"ICLR.cc/2025/Conference/Submission8819/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8819/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer i2kJ\", \"comment\": \"Thank you very much for your feedback and suggestions in the review! Below are our responses to your review:\\n\\n---\\n> My biggest concern is the limited novelty and experiments. There are many papers that proposed strategy-generating methods as a summary or reflection of the past trajectories, such as [1]. The authors failed to discuss the similarities and differences between their method and these works.\\n\\nIn fact, we have already clearly discussed the similarities and differences between our proposed DYSTIL method and Reflexion [1] as well as other works in applying LLMs to sequential decision making in Line 497-509 of our paper manuscript:\\n\\n> \\\"Recently there has been a series of works that explore different approaches for applying LLMs to sequential decision making tasks (Yao et al., 2023; Shinn et al., 2023; Zhao et al., 2024; Yao et al., 2024). All these existing methods have two major limitations: (1) they all require querying the API of a large-scale closed-source LLM for the agent\\u2019s decision making at every single time step, which make them highly infeasible for many important real-world tasks and applications that require fast inference speed to make timely decisions or require offline and lightweight deployment to integrate with operating hardware (such as robots); (2) they all rely on prompting to make inference of action decisions with frozen closed-source LLMs at every single time step, and thus do not support parametrized policy learning. In contrast, for DYSTIL the decision making inference at all time steps is run on a lightweight open-source LLM that supports full model parameter tuning. As a result, DYSTIL has the advantage of fast real-time inference during decision making, easy deployment over different application scenarios, and compatibility with on-policy reinforcement learning algorithms, while still being able to learn high-level strategies through strategy distillation from large-scale closed-source LLMs.\\\"\\n\\n---\\n> Whether this approach can generalize and how to design each component for different tasks remains unclear.\\n\\nAll the steps and procedures of our proposed DYSTIL method are clearly described in Section 2 of the paper in a very general manner without restricting them to any particular RL tasks or environments. Therefore, our proposed DYSTIL method is widely applicable to different RL tasks. In its design it was not specifically tailored to any particular RL tasks. For different RL tasks, you should follow the same procedures and principles as detailed in the paper to design each component of DYSTIL.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper proposed DYSTIL which integrates LLMs into a strategy-based neuro-symbolic reinforcement learning framework. The method aims to address the generalization issue of RL.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The authors provide illustrations of their method, which makes it clear to how it works.\", \"weaknesses\": \"1. My biggest concern is the limited novelty and experiments. There are many papers that proposed strategy-generating methods as a summary or reflection of the past trajectories, such as [1]. The authors failed to discuss the similarities and differences between their method and these works.\\n2. The experiments are only conducted in several environments from Minigrid. Whether this approach can generalize and how to design each component for different tasks remains unclear. Besides, the compared baselines are limited. I strongly encourage the authors to do literature reviews and add more baselines such as [1].\\n\\n[1] Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer eNLJ - Part 2\", \"comment\": \"---\\n> The word \\\"neuro-symbolic\\\" is used to characterize the method, but is it really a neuro-symbolic method?\\n\\nYes, our DYSTIL method is indeed a very typical neuro-symbolic method. As illustrated in Figure 1 of the paper, our proposed DYSTIL RL agent has both (1) neural components, which includes its core reasoning LLM, language modeling head, and value network; and (2) a symbolic component, which is the list of strategies in the form of natural language texts. During the reinforcement learning process, DYSTIL repeatedly alternates between (1) performing explicit reasoning over symbolic rules (i.e. the list of strategies) given newly collected empirical evidence from the environment to try to improve them, and (2) performing parametrized policy optimization on the neural components of the DYSTIL RL agent in order to internalize updated strategies (symbolic rules). This makes DYSTIL a very typical neuro-symbolic method, and perfectly justifies its characterization as a neuro-symbolic method.\"}",
"{\"summary\": \"The paper introduces DYSTIL, a neuro-symbolic reinforcement learning framework that integrates large language models (LLMs) to dynamically induce and internalize textual strategies, enhancing policy generalization and sample efficiency. By leveraging LLMs to provide strategy guidance based on expert demonstrations, DYSTIL improves interpretability and transparency in reinforcement learning tasks. Empirical results demonstrate that DYSTIL outperforms state-of-the-art methods by 17.75% in success rate across complex environments like Minigrid and BabyAI.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Quality: The paper presents ideas through clear text and figures, aiding understanding of the overall concepts.\", \"Significance: This paper demonstrates that the proposed method outperforms two baselines in grid-world environments.\"], \"weaknesses\": [\"Scalability: DYSTIL\\u2019s reliance on closed-source, SOTA LLMs (e.g., GPT-4o) raises issues of scalability, reproducibility, and accessibility, especially for the model which needs to recurrently call strategy-generating LLM for each iteration. The paper also lacks ablation studies using different LLMs, which would help clarify the flexibility of using other LLMs for this work.\"], \"questions\": \"1. Can DYSTIL generalize to other language-based decision-making tasks, such as those solved by ReAct (e.g., ALFWorld)? How could you extend your framework to accommodate these tasks?\\n2. In the GLAM baseline paper[1], the average success rate converges and reaches high performance (i.e., over 80%) at approximately 1e6 steps. Is there a reason you chose 1e5 steps for evaluation? What causes the discrepancy between your configuration and results compared to theirs?\\n3. In the ablation study, dynamic strategy updates are removed, so there is no $\\\\mathcal{L}_2$ in the static strategy settings. Does this result in more iterations compared to the proposed method based on the same training frames? I also want to confirm whether $\\\\mathcal{L}, \\\\mathcal{L}_1, \\\\mathcal{L}_2$'s executions are all counted in training frames.\\n4. Can the strategies generalize to novel tasks? For instance, would training on Unlock Pickup help in solving Key Corridor?\\n\\n[1] Carta et. al. \\\"Grounding large language models in interactive environments with online reinforcement learning\\\". In Proceedings of ICLR 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper presents a neuro-symbolic reinforcement learning framework that integrates LLMs to generate and update strategies for RL agents. The framework aims to improve policy generalization and sample efficiency while maintaining interpretability. Dispite of the interesting integration of LLMs with RL, reviewers raised concerns on the limited novelty compared to existing strategy-generating methods, the scalability concerns with closed-source LLM dependency, the limited experimental scope, and the insufficient statistical validation. The authors are encouraged to improve this work from these aspects.\", \"additional_comments_on_reviewer_discussion\": \"The discussion revealed a gap between the authors' view of their contribution and the reviewers' assessment, with limited productive dialogue after the initial rebuttals.\"}",
"{\"summary\": \"The paper proposes a way to leverage expert data for behavior cloning and RL to craft policies that are conditioned on a set of strategies which hopefully encode generalizable behavior. The authors claim that existing methods on BC+RL suffer from important issues (poor generalization, sample efficiency and interpretability), which the proposed approach can address. In particular, the authors train an open source LLM on expert data through behavior cloning by conditioning the policy on strategies that are devised by a teacher LLM (GPT-4o). The model is then trained with RL data, leading to a new list of strategies, which is then used to further guide the agent. The strategies are selected by verifying whether they help the model achieve higher performance. On a set of four environments, the paper shows that the proposed approach improves upon previous baselines.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper identifies an important area for research, that is, how to combine expert data and reinforcement learning. The proposed approach is different from some of the more traditional ways of leveraging both kinds of data, building on the strengths of LLMs to devise high level strategies.\", \"weaknesses\": \"The empirical setup brings a lot of questions. A major red flag is that there are no error bars at all and no mention of the number of seeds. Please see the rich literature on the subject of statistical relevance in decision making [1, 2]. For the tasks themselves, it is not clear why some choices are made. For example, is the max_steps=60 the default number? In the codebase of Minigrid I can see that the default value is set of 100, so an explanation would be necessary.\\n\\nAnother important area of doubt is concerning the strategy for updating the list of strategies. Currently, this is a complicated method that relies on evolving the strategy list with respect to performance. Why is such a complex method used? How sensitive is it to the different hyperparameters? How does it compare to simply asking GPT-4o for a new list of strategies? These are key questions that are completely unanswered.\\n\\nThe authors claim that generalization is a limitation of BC+RL, yet the paper does not show any experiments on generalization. This would be a great opportunity for the authors to show the compositionally that is afforded by language. It would also be a great opportunity to address another important are of concern: how much does the list of strategies affect the model? How much can you steer its behavior by changing the list? At the moment, it really isn't clear that the RL agent really responds to this conditioning.\\n\\nThe performance numbers reported for some of these tasks seems very low, which also comes from a limited set of baselines. In particular, I would really like to see the performance of GPT-4o on these tasks. Another family of baselines would be to compare to LLMs generating goals for the RL agent [3], which is relatively close to the ideas presented here. Notice that in that paper the results are significantly better than the numbers presented here.\", \"questions\": \"In the introduction, it is mentioned that BC+RL can't enable an RL agent tot acquire higher level abstractions and understanding of the RL task. This is not only very loosely defined, but likely not true. Do the authors mean that an RL agent wouldn't acquire an understanding that can be translated in language? 
This is very different than the claims being made.\\n\\nWhy use GPT-4o for generating strategies? How does Llama 3.1 compare? It would be much more preferable to have it be the same family of models.\\n\\nThe word \\\"neuro-symbolic\\\" is used to characterize the method, but is it really a neuro-symbolic method? To me it just seems like the neural network is additionally conditioned on language. This qualifier seems a bit of stretch.\\n\\n[1] Deep Reinforcement Learning at the Edge of the Statistical Precipice, Agrawal et al., 2022\\n\\n[2] Deep reinforcement learning that matter, Henderson et al., 2018\\n\\n[3] Code as Reward: Empowering Reinforcement Learning with VLMs, Venuto et al., 2024\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer eNLJ - Part 1\", \"comment\": \"Thank you very much for your feedback and suggestions in the review! Below are our responses to your review:\\n\\n---\\n> For the tasks themselves, it is not clear why some choices are made. For example, is the max_steps=60 the default number?\\n\\nBoth Minigrid and BabyAI are highly flexible and modular RL env libraries that support researchers to customize RL testing environments that they see most fit to their specific research goals and purposes in their projects. Therefore, Minigrid and BabyAI allows researchers to set max_steps to any integer they want. Essentially, this max_steps parameter is used to control the relative difficulty of each task, and the numerical scores of an agent\\u2019s performance will also be affected as reward value is computed as \\u20181 \\u2212 0.9 \\u00d7 (total_steps/max_steps)\\u2019. In our experiment we choose to set max_steps = 60 for \\u2018Unlock Pickup\\u2019, \\u2018Key Corridor\\u2019, \\u2018Put Next\\u2019 in order to: (1) increase the difficulty of these tasks so that our evaluation metrics (especially the success rate) can be better used to gauge the reasoning and planning abilities of agents; (2) speed up the RL training process.\\n\\n---\\n> Another important area of doubt is concerning the strategy for updating the list of strategies. Currently, this is a complicated method that relies on evolving the strategy list with respect to performance. Why is such a complex method used?\\n\\n\\nWe don't agree that the strategy update method of DYSTIL is 'complex' or 'complicated'. On the contrary, we have already followed a minimalistic approach when designing the strategy-updating procedures in DYSTIL, and all the steps are necessary and serving very important purposes as explained in our paper. In fact, the reason why we design to use advantage estimates to help guide the strategy-generating LLM to generate new list of strategies instead of simply \\u2018asking GPT-4o for a new list of strategies\\u2019 has already been clearly explained in Line 266-275 of the paper manuscript:\\n\\n> \\\"One important limitation of existing methods for rule induction with LLMs for sequential decision making tasks is the lack of a credit assignment mechanism that can clearly inform the LLMs which specific action decisions are mainly responsible for the eventual success or failure of different trajectories (Zhao et al., 2024), thus significantly limiting their reasoning ability to analyze how to best adjust its induced rules to correct unfavorable action decisions. In reinforcement learning, estimation of the advantage function (Sutton et al., 1999; Schulman et al., 2016) is a commonly used technique for solving the credit assignment problem. So in DYSTIL, we use the advantage estimates calculated in the previous step to filter out the most suspiciously problematic (observation, action) pairs that could contribute the most to low episode returns, and to help the strategy-generating LLM to efficiently discern which strategy items need revision and update.\\\"\\n\\nIn contrast, simply \\u2018asking GPT-4o for a new list of strategies\\u2019 would be aimless and thus highly inefficient and completely relying on luck.\\n\\n---\\n> It would also be a great opportunity to address another important are of concern: how much does the list of strategies affect the model? How much can you steer its behavior by changing the list? 
At the moment, it really isn't clear that the RL agent really responds to this conditioning.\\n\\nAccording to our design of the DYSTIL framework, we do not expect a new list of strategies newly induced by the strategy-generating LLM to be immediately reflected in the RL agent\\u2019s policy and behavior. Instead, according to our design of DYSTIL, any newly induced list of strategies will need to be gradually learned and internalized by the RL agent in the PPO optimization steps that follow the injection of the new list of strategies in order to be mastered by the RL agent.\\n\\n---\\n> The performance numbers reported for some of these tasks seems very low\\n\\nThe numerical values of the performance numbers reported for some tasks seems \\u2018low\\u2019 because: (1) these tasks are by design inherently more difficult than other simpler tasks in the libraries; (2) we purposefully set \\u2018max_steps = 60\\u2019 as explained earlier, and thus the performance numbers\\u2019 numerical values will naturally be lower according to the way the reward is calculated in Minigrid and BabyAI environments, which is very reasonable and understandable. Also, in our experiment results, what really matters is the relative comparison of the performance numbers of different methods, not their absolute numerical values.\"}",
"{\"title\": \"Response to Reviewer QBcw\", \"comment\": \"Thank you very much for your feedback and suggestions in the review! Below are our responses to your review:\\n\\n---\\n\\n> - Scalability: DYSTIL\\u2019s reliance on closed-source, SOTA LLMs (e.g., GPT-4o) raises issues of scalability, reproducibility, and accessibility, especially for the model which needs to recurrently call strategy-generating LLM for each iteration. \\n\\n\\nNowadays there are many papers that propose new methods that rely on recurrently calling and querying closed-source and SOTA LLMs (such as [R1]), which is a perfectly fine and legitimate approach that is very widely accepted and adopted in today\\u2019s AI research community. Many of these papers\\u2019 proposed methods even rely on querying closed-source and SOTA LLMs for making decisions at every single time step, while in contrast our DYSTIL method only requires querying a closed-source and SOTA LLM once every epoch. Therefore, DYSTIL is already performing much fewer callings of closed-source and SOTA LLMs than many existing works in the domain.\\n\\n---\\n> 1. Can DYSTIL generalize to other language-based decision-making tasks, such as those solved by ReAct (e.g., ALFWorld)?\\n\\nDYSTIL is designed as a reinforcement learning framework, so it should be applied to reinforcement learning tasks.\\n\\n---\\n> 2. In the GLAM baseline paper[1], the average success rate converges and reaches high performance (i.e., over 80%) at approximately 1e6 steps. Is there a reason you chose 1e5 steps for evaluation? What causes the discrepancy between your configuration and results compared to theirs?\\n\\nThe reason why our RL training takes fewer training time steps than the original RL training in the GLAM baseline paper is because in our experiment we have access to a set of expert demonstration trajectories. This setup is because, as described in the Problem Formulation paragraph of Section 2.1, in this paper we target the problem of \\u2018reinforcement learning from expert demonstration\\u2019. Therefore, the RL algorithms are not trained from scratch, but are initialized by a checkpoint that we obtain from first running behavioral cloning over the set of expert demonstration trajectories. That\\u2019s why the following RL training is much more efficient and takes fewer training time steps than that in the GLAM baseline paper.\\n\\n---\\n> 3. In the ablation study, dynamic strategy updates are removed, so there is no $\\\\mathcal{L}_2$ in the static strategy settings. Does this result in more iterations compared to the proposed method based on the same training frames? I also want to confirm whether $\\\\mathcal{L}$, $\\\\mathcal{L}_1$, $\\\\mathcal{L}_2$'s executions are all counted in training frames.\\n\\nIn the counting of training frames, we only count $\\\\mathcal{L}$\\u2019s policy execution trajectories as these frames are the actual frames that are collected into the experience buffer and used as training data to update model parameters during each training epoch. Only these frames executed by $\\\\mathcal{L}$ are meaningful to be counted when comparing sample efficiency of RL algorithms. In the ablation study, after the dynamic strategy updates are removed, there are actually less computation steps in each epoch as there will be no $\\\\mathcal{L}_2$, and thus no test-evaluate-compare-select procedure in each epoch for this ablated method.\\n\\n---\\n> 4. Can the strategies generalize to novel tasks? 
For instance, would training on Unlock Pickup help in solving Key Corridor?\\n\\nCurrently our DYSTIL framework is designed to be task-specific, as different RL tasks often tend to require different strategies. But your suggestion is very nice and constructive, and in future work we can explore the possibilities of extending our method to also enable inter-task strategy generalization to a certain extent.\\n\\n---\\n**References**\\n\\n- [R1] Yao et al. Retroformer: Retrospective large language agents with policy gradient optimization. In The Twelfth International Conference on Learning Representations, 2024\"}"
]
} |
19ufhreGTj | Understanding Dimensional Collapse in Cross-Modal Feature Distillation | [
"Dae Ung Jo",
"Sujin Jang",
"Jay Heo",
"Sung Ju Hwang"
] | To overcome limited computing resources and the complexity of sensor configurations in deploying multi-modal neural networks in real-world applications, cross-modal knowledge distillation (CMKD) aims to transfer valuable information from a pretrained teacher model to a deployable student model with the target modality. Despite the successful applications of CMKD in various fields, our understanding of knowledge transfer across different modalities remains insufficient to fully explain the efficacy of feature distillation. In this work, we investigate the relationship between the distributional shifts across modalities, referred to as the modality gap, and its impact on the effectiveness of CMKD, particularly focusing on the problem of cross-modal feature distillation. We first hypothesize and empirically validate that the modality gap between the teacher and student causes dimensional collapse in the student's feature space. To prevent such inefficiency, we propose a Cross-modal Information Bottleneck Approximation (CIBA) scheme aimed at extracting and transferring modality-general features from the teacher model. Lastly, we experimentally demonstrate that our distillation strategy effectively reduces the dimensional collapse in the student model, thereby achieving improved performance for various real-world multi-modal datasets. | [
"knowledge distillation",
"feature distillation",
"cross-modal learning",
"dimensional collapse"
] | Reject | https://openreview.net/pdf?id=19ufhreGTj | https://openreview.net/forum?id=19ufhreGTj | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"z3uIWZD2ns",
"ygQ2qKudqV",
"yITLUiBOHW",
"t7db1042ui",
"rlxWFiqyuX",
"pEU9dKWOzO",
"mUtrBSBfBj",
"lNlQxLiU3G",
"lDCwWtpKx7",
"fuku09cAkC",
"fVFsySfFCU",
"b9vbXqDRRI",
"aTtdmTYVJq",
"Uuavx6X4EZ",
"SMrQPsAoUl",
"OSmOXrqbYO",
"HtQXVQzjT8",
"GHtZvsxXX4",
"FuUTEEYFGJ",
"ET8GVxGGrB",
"AyVLL9eOOf",
"ARnaK7REEf",
"7jUPVGZUlB",
"4j7LDZQCEj",
"3Kd6vBbgvA",
"2zc52EjjML",
"2alH5HPmTS",
"2T3ZkZrnkc",
"0Zls3BxImk"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732632825965,
1733184931546,
1732632785595,
1734698713198,
1732634863470,
1730800691579,
1732507986789,
1732632990654,
1732691420581,
1732278882108,
1730538583064,
1737523690223,
1732278625784,
1730787627777,
1733198149677,
1732278160459,
1732278471772,
1733057077491,
1732278513490,
1732630972731,
1732340464299,
1730993460283,
1732278284655,
1732278862191,
1730045054070,
1733114993587,
1732278037138,
1732278564545,
1732705714774
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_XgGs"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Area_Chair_9Ror"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_XgGs"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_nuh2"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_nuh2"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_AFwU"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_9LAr"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_i91g"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_9LAr"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_i91g"
],
[
"ICLR.cc/2025/Conference/Submission5189/Reviewer_AFwU"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5189/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Comment by Reviewer XgGs\", \"comment\": \"Thank you for your response. I'd like to update the score accordingly.\"}",
"{\"comment\": \"We appreciate your recognition of our contributions and your decision to maintain the acceptance score. Thank you again for your careful consideration and the time and effort you have dedicated to this review.\"}",
"{\"comment\": \"We sincerely appreciate your recognition of the contributions of our work and your decision to maintain the acceptance score. Thank you again for your careful consideration and the time and effort you have dedicated to this review.\"}",
"{\"metareview\": \"The submission investigates the issue of dimensional collapse in cross-modal knowledge distillation (CMKD) caused by the modality gap between teacher and student models. The authors propose the Cross-modal Information Bottleneck Approximation (CIBA) framework to mitigate this issue by disentangling modality-general and modality-specific features. While the paper is well-written and supported by extensive experiments, the novelty and contributions are deemed insufficiently compelling for acceptance at this stage.\\n\\nRelated work in modality gap analysis and logit-level distillation (e.g., C2KD) is insufficiently addressed, and comparisons to state-of-the-art techniques are either missing or not directly relevant. Theoretical analyses are based on linear feature extractors, which may not generalize to the non-linear encoders used in practical applications. Several reviewers (e.g., XgGs, i91g, AFwU) retained their scores near the acceptance threshold, citing unresolved concerns about novelty, experimental robustness, and comparisons to advanced baselines.\", \"additional_comments_on_reviewer_discussion\": \"Multiple reviewers (9LAr, XgGs, i91g) highlighted that the primary contributions rely on established methods such as the Information Bottleneck (IB) framework and a combination of generation loss and KL loss.\"}",
"{\"comment\": \"Thank you for your response and for increasing your rating (from 3 to 5).\\n\\nWe regret that your decision is marginally below acceptance. To facilitate a constructive discussion, we kindly request that you further elaborate on any remaining concerns that make you hesitant to rate our work with a clear acceptance. We believe this would greatly assist us in improving our work, and we are always ready to address any additional concerns you may have.\\n\\nWe sincerely appreciate your careful consideration and the time and effort you have dedicated to this review again.\"}",
"{\"summary\": \"The paper titled \\\"Understanding Dimensional Collapse in Cross-Modal Feature Distillation\\\" investigates the challenges of transferring knowledge across different modalities in multi-modal neural networks, specifically focusing on the problem of dimensional collapse in cross-modal feature distillation (CMFD). The authors hypothesize that the modality gap between the teacher and student models leads to dimensional collapse in the student's feature space, which degrades the quality of knowledge distillation. To address this, they propose a novel framework called Cross-modal Information Bottleneck Approximation (CIBA), which aims to extract and transfer modality-general features from the teacher model to sub-dimensions of the student model's features. The paper empirically demonstrates that CIBA effectively reduces dimensional collapse and improves performance on various real-world multi-modal datasets, including RAVDESS (Audio-Image), MM-IMDB (Image-Text), and nuScenes (LiDAR-Camera). The key contributions of the paper are the theoretical and empirical investigation of the modality gap's impact on CMKD, the proposal of the CIBA framework, and the validation of its effectiveness across different modalities.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper proposes a novel cross-modal knowledge distillation (CMKD) method by focusing on the issue of dimensional collapse in cross-modal feature distillation. The concept of modality gap and its impact on the efficacy of feature distillation is a fresh approach to understanding the limitations of CMKD. The proposal of the Cross-modal Information Bottleneck Approximation (CIBA) scheme is creative and addresses a significant problem in transferring knowledge across different modalities.\\n\\nThe paper is well-written. The figures and tables are clear and effectively support the textual content.\", \"weaknesses\": \"The contribution is incremental.\\n\\nI feel the information bottleneck approximation idea has been used extensively.\\n\\nWhile the proposed method is shown to outperform baseline approaches, it is unclear how it compares to the most recent and advanced techniques in the field, i.e., DML[1], DKD[2], DIST[3], C2KD[4].\\n\\n[1] Ying Zhang, Tao Xiang, Timothy M. Hospedales, and Huchuan Lu. Deep mutual learning. In CVPR, 2018. 1, 3, 6, 7.\\n\\n[2] Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, and Jiajun Liang. Decoupled knowledge distillation. In CVPR, 2022. 1, 2, 3, 4, 6, 7.\\n\\n[3] Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu. Knowledge distillation from a stronger teacher. In NeurIPS, 2022. 1, 2, 6, 7, 8.\\n\\n[4] Huo F, Xu W, Guo J, et al. C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 16006-16015.\", \"questions\": \"How does the performance of CIBA compare when the assumption of orthogonality between modality-general and modality-specific features is relaxed?\\n\\nHow does CIBA differ from the previous CMKD approaches?\\n\\nHow sensitive is the CIBA framework to the choice of hyperparameters, particularly the dimension of the bottleneck feature (H)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Respectful Request for Rebuttal Feedback\", \"comment\": \"Dear All Reviewers,\\n\\nWe sincerely appreciate the time and effort that you have dedicated to reviewing our paper. Considering the significance of the discussion phase, we would like to ask if you could spare some time to review our rebuttal responses provided in the global and individual comments.\\n\\nPlease let us know if you have any areas needing clarification or follow-up questions. Your expertise and feedback are valuable to us, and we are fully prepared to handle any further questions and provide the necessary clarifications.\\n\\nWe sincerely hope that this discussion will help address the reviewers' remaining concerns and lead to an improved evaluation of our work.\\n\\nBest regards, \\\\\\nThe authors.\"}",
"{\"comment\": \"Thank you for your positive feedback and for supporting our work by increasing your rating towards acceptance (from 5 to 6). We are glad to hear that your main concerns have been resolved. We sincerely appreciate your careful consideration and the time and effort you have dedicated to this review again.\"}",
"{\"title\": \"Thanks for your comment!\", \"comment\": \"Thanks for your response! I maintain the original score.\"}",
"{\"comment\": \"## **Additional Related Works**\\n\\nAs we mentioned in the introduction (L45-L49), our paper **primarily explores how the modality gap in cross-modal settings can lead to dimensional collapse in the student model**. Consequently, our main focus is on addressing cross-modal **\\\"feature\\\"** distillation, where dimensional collapse can severely degrade the distillation quality.\\nAt the same time, we acknowledge the importance of logit-level (output-level) knowledge distillation, as both logit- and feature-level KD can offer complementary insights to synergistically address the modality gap problem. This represents an intriguing direction for future work, as discussed in Section 6 (Discussion and Conclusion).\\nIn response to the reviewers' feedback, we have introduced additional concurrent literature and drawn comparisons in the context of the modality gap in Section 2.1 of the revised document.\\n\\n \\n\\n## **Details of Figure 1**\\n\\nAs you mentioned, **Fig.1 effectively illustrates the motivation behind our work**. In response to your feedback, we have included additional details for Fig.1 in Appendix D.6 of the revised document.\\nTo summarize, the process for creating Fig.1 is as follows:\\nFirst, we extract features from the training data for each of the four models presented in Fig.1: (a) audio baseline, (b) image baseline (w/o distillation), (c) image model trained with MSE distillation, and (d) image model trained with our CIBA framework. The extracted features form matrices of size $D$ (feature dimension) by $N$ (number of samples).\\nThen, all features are concatenated along the dimensional axis to form a $4D \\\\times N$ matrix, which is subsequently projected into a 2D space using the t-SNE algorithm (i.e., $4D\\\\times N \\\\rightarrow 4D \\\\times 2$). Please note that the projection is performed along $N$, not $D$, to observe the distribution of modality-general and modality-specific information inherent in the learned features.\\nFinally, to enable clear comparisons of the projected features, we present visualizations of each image model\\u2019s features alongside those of the teacher (audio) model.\"}",
"{\"summary\": \"This paper investigates the relationship between distributional shifts across modalities and their impact on the effectiveness of cross-modal knowledge distillation (CMKD), specifically addressing the issue of cross-modal feature distillation. The authors hypothesize and validate that the modality gap between the teacher and student models may lead to dimensional collapse in the student\\u2019s feature space. To address this, they propose a Cross-modal Information Bottleneck Approximation (CIBA) scheme aimed at extracting and transferring modality-general features from the teacher model. Experimental results demonstrate that the proposed distillation strategy effectively mitigates dimensional collapse in the student model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe authors successfully propose and validate that the modality gap between the teacher and student models can lead to dimensional collapse in the student\\u2019s feature space.\\n\\n2.\\tA novel Cross-modal Information Bottleneck Approximation (CIBA) scheme is introduced to extract and transfer modality-general features from the teacher model.\\n\\n3.\\tExperimental results across various loss functions and tasks provide strong evidence for the effectiveness of the proposed method.\", \"weaknesses\": \"1.\\tThe work is predicated on the assumption of linear feature extractors; however, in practical applications, most feature extractors are non-linear.\\n\\n2.\\tIn the MM-IMDB dataset, the observed improvement is marginal. Could you please provide a more detailed explanation for this finding?\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We sincerely appreciate your positive evaluation of our work. In particular, we are grateful for your acknowledgment of the core contributions of our paper: **investigation of the influence of the modality gap on CMFD in relation to the dimensional collapse phenomenon**, as well as **proposing CIBA framework** and its **experimental validation** aimed at mitigating the impact of the modality gap.\\n\\nWe would also be deeply grateful if you could *consider providing an improved evaluation*, should our revisions have sufficiently addressed your concerns.\\n\\n \\n\\n## **Empirical Extension of Theoretical Analysis to Non-linear Settings**\\n\\nWhile providing theoretical analyses of dimensional collapse in non-linear settings may seem appealing, the pursuit of such analyses itself constitutes a distinct and interesting field of research, encompassing areas such as identifiability and independent component analysis [1,2]. It is worth noting that **prior works on the modality focusing hypothesis [3] and dimensional collapse [4] also established their theoretical validity using linear feature extractors**, likely due to the inherent challenges of addressing non-linear settings.\\n\\nWe would like to emphasize that we have **empirically demonstrated how theoretical insights from simple linear settings can be extended to complex non-linear settings**, including real-world multi-modal datasets. Specifically, in Section 5.1.1, we showed that the MSE loss still causes dimensional collapse in real-world settings (Fig.5(a)), whereas our CIBA framework successfully alleviates this issue (Tab.1(a)). \\n\\n> [1] H\\u00e4lv\\u00e4 et al., Disentangling identifiable features from noisy data with structured nonlinear ICA, NeurIPS, 2021. \\\\\\n[2] Hyv\\u00e4rinen et al., Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning, Patterns, 2023. \\\\\\n[3] Xue et al., The modality focusing hypothesis: Towards understanding crossmodal knowledge distillation, ICLR 2024. \\\\\\n[4] Jing et al., Understanding dimensional collapse in contrastive self-supervised learning, ICLR 2022.\\n\\n \\n\\n## **Significance of Experimental Results on MM-IMDB Dataset**\\n\\nIn Appendix E.1, we provided statistical analyses of the results from Tab.1, including an analysis of the results on the MM-IMDB dataset. To summarize, the proposed method demonstrates **a statistically significant improvement in F1-macro performance** compared to the MSE baseline. Notably, on **the long-tailed MM-IMDB dataset**, the improvement in F1-macro, which measures the average performance across classes, highlights that **our proposed method enables the student model to learn diverse and discriminative features**, further validating these findings.\"}",
"{\"summary\": \"This paper tries to address the challenges associated with deploying multi-modal neural networks in real-world applications, specifically focusing on the constraints of limited computing resources and complex sensor configurations. The authors explore Cross-Modal Knowledge Distillation (CMKD) as a solution for transferring knowledge from a pretrained teacher model to a more deployable student model tailored to a target modality. Despite the advancements in CMKD across various domains, the paper identifies a gap in understanding how distributional shifts between modalities\\u2014referred to as the modality gap\\u2014affect the efficacy of feature distillation. The study hypothesizes and empirically demonstrates that a significant modality gap leads to dimensional collapse within the student model's feature space, undermining performance. To mitigate this issue, the authors introduce the Cross-modal Information Bottleneck Approximation (CIBA) scheme, designed to extract and transfer modality-general features from the teacher model effectively. Experimental results on diverse real-world multi-modal datasets confirm that the proposed CIBA method successfully reduces dimensional collapse in the student model, resulting in enhanced performance. This work contributes a deeper understanding of the interplay between modality gaps and knowledge transfer in CMKD, offering a practical solution to improve the deployment of multi-modal neural networks under resource constraints.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The theoretical and empirical investigation of the \\\"modality gap\\\"\\u2014the distributional shifts between different modalities\\u2014and its detrimental effect on CMKD, specifically leading to dimensional collapse in the student model\\u2019s feature space.\\n\\n2. CIBA extracts modality-general features from the teacher model and transfers them to sub-dimensions of the student\\u2019s features. This method mitigates the dimensional collapse, ensuring more robust and effective knowledge transfer.\", \"weaknesses\": \"1. Since RAVDESS is a relatively small size dataset. Do you try to work on VGGSound?\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear all reviewers,\\\\\\n\\\\\\nWe would like to extend our sincere gratitude for your constructive feedback and thoughtful engagement during the discussion period. \\n\\nAs the discussion period comes to a close, we would like to highlight that **no additional questions or concerns were raised following our rebuttal**. We hope this indicates that **our responses have sufficiently addressed your comments and resolved any outstanding issues**.\\n\\nIn this context, we kindly and respectfully ask you to **consider an upward adjustment to your evaluation score**.\\n\\nThank you once again for your time, effort, and valuable contributions.\\n\\n\\\\\\nBest regards,\\\\\\nAuthors\", \"title\": \"Final Comments from the Authors\"}",
"{\"comment\": \"We appreciate the constructive comments from reviewer \\\"9LAr\\\", recognizing the strengths of our work as **writing quality** and **identifying the effect of the modality gap on the dimensional collapse**.\\n\\nWe hope that our comments below adequately address your remaining concerns and lead to an *increased rating of our work*.\\n\\n \\n\\n## **Robustness with Respect to Encoder Capacity**\\n\\nAs discussed in Section 3.6, learned features in practical settings often contain a mixture of modality-general information, modality-specific information, and sensor noise. \\nWe acknowledge the reviewer's concern that the degree of such information mixing may vary depending on the capacity of the encoder.\\nTo rigorously address this concern and demonstrate the generalizability of our approach, we conducted additional experiments on the large-scale VGG-Sound dataset using backbones of varying sizes (ResNet-18 and ResNet-50). \\nThe results are presented in Section 5.4 and Appendix E.2 of the revised document. \\nFor your convenience, we provide a summary of the key findings in the table below.\\n\\n>**VGG-Sound**\\n| Method | Modality | Val. | Test |\\n|---|:---:|:---:|:---:|\\n| Video (ResNet50 backbone) | V50 | 50.42 | 49.43 |\\n| Audio (ResNet50 backbone) | A50 | 69.55 | 68.76 |\\n| Video (ResNet18 backbone) | V18 | 42.11 | 41.53 |\\n| Audio (ResNet18 backbone) | A18 | 68.86 | 69.08 |\\n|||||\\n| MSE | V18 $\\\\rightarrow$ A18 | 67.54 | 68.32 |\\n| MSE + CIBA | V18 $\\\\rightarrow$ A18 | 70.11 | 70.39 |\\n|||||\\n| MSE | V50 $\\\\rightarrow$ A18 | 68.53 | 68.54 |\\n| MSE + CIBA | V50 $\\\\rightarrow$ A18 | 70.21 | 70.71 |\\n|||||\\n| MSE | A18 $\\\\rightarrow$ V18 | 42.61 | 41.28 |\\n| MSE + CIBA | A18 $\\\\rightarrow$ V18 | 43.59 | 42.55 |\\n|||||\\n| MSE | A50 $\\\\rightarrow$ V18 | 41.40 | 40.33 |\\n| MSE + CIBA | A50 $\\\\rightarrow$ V18 | 43.44 | 42.95 |\\n\\nExperiments show that our approach consistently outperforms the vanilla distillation strategy (i.e., MSE loss).\\nThese findings confirm that **the CIBA framework retains its superiority regardless of the encoder's capacity**.\\n\\n \\n\\n## **Our Contributions**\\n\\nWhile several studies have attempted to mitigate the modality gap in CMFD [1, 2], to the best of our knowledge, our work is the **first to thoroughly analyze the impact of the modality gap on CMFD in relation to dimensional collapse and to propose the CIBA framework as a solution**.\\nWe acknowledge the reviewer's observation that the Information Bottleneck (IB) and KL loss are widely used techniques in various applications.\\nHowever, our contribution lies in leveraging these methods to specifically address **the unique issue of dimensional collapse in CMFD**, which we believe represents a significant and pioneering advancement in this domain.\\nFurthermore, we have demonstrated **the effectiveness of the CIBA framework across various real-world datasets**, including large-scale datasets such as VGG-Sound (audio-video) and nuScenes (image-LiDAR).\\nWhile the adopted methods, IB and KL loss, may appear straightforward and well-known, this does not diminish the significance of our analyses and demonstrations.\\nPlease note that our methodological contributions have been acknowledged and appreciated by reviewers \\\"XgGs,\\\" \\\"AFwU,\\\" \\\"nuh2,\\\" and \\\"i91g.\\\"\\n\\n> [1] Xue et al., The modality focusing hypothesis: Towards understanding crossmodal knowledge distillation, ICLR 2024 \\n[2] Sarkar1 et al., XKD: Cross-modal Knowledge 
Distillation with Domain Alignment for Video Representation Learning, AAAI 2024\"}",
"{\"comment\": \"We appreciate the constructive comments from reviewer \\\"XgGs,\\\" recognizing the strengths of our work as **fresh approach to understanding the limitations of CMKD**, **proposing a creative distillation scheme**, and **writing quality**.\\n\\nWe hope that our comments below adequately address your remaining concerns and lead to an *increased rating of our work*.\\n\\n \\n\\n## **Our Contributions**\\n\\nWhile several studies have attempted to mitigate the modality gap in CMFD [1, 2], to the best of our knowledge, our work is **the first to thoroughly analyze the impact of the modality gap on CMFD in relation to dimensional collapse and to propose the CIBA framework as a solution**.\\nWe acknowledge the reviewer's observation that the Information Bottleneck (IB) is widely used techniques in various applications.\\nHowever, our contribution lies in leveraging these methods to specifically address the unique issue of dimensional collapse in CMFD, which we believe represents a significant and pioneering advancement in this domain.\\n\\nFurthermore, we have demonstrated **the effectiveness of the CIBA framework across various real-world datasets**, including large-scale datasets such as VGG-Sound (audio-video) and nuScenes (image-LiDAR).\\nWhile the adopted methods, IB may appear straightforward and well-known, this does not diminish the significance of our analyses and demonstrations.\\nPlease note that our methodological contributions have been acknowledged and appreciated by reviewers \\\"AFwU,\\\" \\\"nuh2,\\\" and \\\"I91G.\\\"\\n\\n> [1] Xue et al., The modality focusing hypothesis: Towards understanding crossmodal knowledge distillation, ICLR 2024 \\n[2] Sarkar1 et al., XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning, AAAI 2024\\n\\n \\n\\n## **Uniqueness of CIBA**\\n\\nIn Section 3 and Fig.1, we theoretically and empirically demonstrate that transferring modality-general features from the teacher to the student is crucial for improving the quality of feature distillation. Based on **these unique insights**, we introduced the information bottleneck approach to approximate the modality-general features and distill them into sub-dimensional representations of the student\\u2019s features.\\nOur approach fundamentally differs from prior CMKD methods, which primarily aim to **mimic the entire features of the teacher without considering potential dimensional collapse issues**. Furthermore, we have demonstrated the effectiveness of our CIBA framework across four different real-world multi-modal datasets: RAVDESS (Audio-Image), VGG-Sound (Audio-Video), MM-IMDB (Image-Text), and nuScenes (LiDAR-Camera).\\n\\n\\n \\n\\n## **Comparison to Output-level (Logit-level) Distillation Methods**\\n\\nThank you for recommending interesting prior works from the knowledge distillation literature. We reviewed the details of each work and found them to be compelling, with a focus on output-level (logit-level) distillation. However, this focus seems to be slightly misaligned with our primary objective, which is **\\\"feature-level distillation**\\\".\\nDML [1] utilizes multiple students and promotes their collaborative learning by minimizing the discrepancy in predictions across students, while DKD [2], DIST [3], and C2KD [4] propose methods to exploit the relationship between predictions and target class distributions. 
In particular, [1-3] focus on uni-modal distillation approaches, whereas C2KD [4] addresses the cross-modal distillation problem, which aligns with our motivation.\\n\\nWe believe that **feature-level and output-level distillation strategies present distinct challenges**, necessitating thorough analyses to better understand their unique dynamics. While we recognize the importance of addressing dimensional collapse for achieving robust cross-modal feature distillation performance, **we did not explicitly analyze this issue at the output prediction stage**. Consequently, we are uncertain about how to directly compare our approach with existing methods in this context.\\nHowever, we believe that combining feature-level and output-level distillation strategies could create synergies to further enhance the quality of CMKD. We leave this exploration for future work.\\n\\n> [1] Ying Zhang, Tao Xiang, Timothy M. Hospedales, and Huchuan Lu. Deep mutual learning. In CVPR, 2018. \\\\\\n[2] Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, and Jiajun Liang. Decoupled knowledge distillation. In CVPR, 2022. \\\\\\n[3] Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu. Knowledge distillation from a stronger teacher. In NeurIPS, 2022. \\\\\\n[4] Huo F, Xu W, Guo J, et al. C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 16006-16015.\"}",
"{\"title\": \"Feedback reminder\", \"comment\": \"Dear Reviewer AFwU,\\n\\nWe sincerely appreciate the time and effort that you have dedicated to reviewing our paper. Considering the significance of the discussion phase, we would like to ask if you could spare some time to review our rebuttal responses.\\n\\nTaking into account your comments, we have conducted additional validation on a relatively large dataset (VGG-Sound). We hope that these results will help address your concerns and lead to an improved evaluation of our work.\\n\\nBest regards,\\nThe authors.\"}",
"{\"comment\": \"## **Relaxation of Orthogonality Assumption**\\n\\nAs described in Section 3.6, the orthogonality assumption may not hold in real-world datasets since learned features often contain a mixture of modality-general information, modality-specific information, and sensor noise. Meanwhile, we have shown that **dimensional collapse occurs even in nonlinear real-world datasets**, as depicted in Fig.5(a). Furthermore, we have demonstrated that **addressing such issues significantly improves the quality of CMFD**, as evidenced by our extensive experimental results (Tab.1, 2, and 3).\\n\\nTo further address the reviewer's concern, we conducted additional experiments where the orthogonality assumption was removed from the synthetic datasets discussed in Section 3.5. Specifically, we omitted the Gram-Schmidt process during synthetic data generation, meaning the data were generated following a unit normal Gaussian distribution and were **not fully orthogonal**.\\nAs shown in Fig.10 in Appendix E.6, The spectrum of singular values of the student features also decreases as the dimension of modality-general features decreases, although the distributions are less distinctive compared to those of orthogonal features (Fig.3). This reduction in the spectrum indicates the occurrence of dimensional collapse. Therefore, **our claims regarding modality-general information and dimensional collapse remains valid even in scenarios where the assumption of orthogonality is relaxed**.\\n\\n \\n\\n## **Ablation on Hyperparameter $H$**\\n\\nAs the reviewer pointed out, the bottleneck dimension parameter $H$ plays a crucial role in determining the quality of cross-modal feature distillation.\\nWe also elaborated that the optimal value of $H$ is proportional to the amount of modality-general information present in the teacher\\u2019s features (L427-431) and is also influenced by the method used to transfer this information (L410-413).\\nIn addition to the ablation results from RAVDESS presented in Fig.5(c), we also have showcased the impact of $H$ in LiDAR-Camera cross-modal feature distillation in Tab.2 by alternating the parameter $H$, where $H=4$ shows the best distillation performance.\\nTo further address the reviewer's concern, we provide **the ablation of $H$ on MM-IMDB and VGGSound dataset in the revised Appendix E.7**.\\nNotably, except for extreme values of $H$, **the proposed method consistently outperforms the MSE method**, as shown in Fig.11 and 12.\"}",
"{\"comment\": \"Thank you for your response and additional experiments, my main concerns have been answered. I'd like to raise my score to 6.\"}",
"{\"title\": \"Official Comment by Reviewer i91g\", \"comment\": \"Thank you for the detailed responses and additional experiments! I maintain the original score.\"}",
"{\"summary\": \"In this paper, the author mainly propose to solve the problem of dimensional collapse caused by modality gap in cross-modal knowlegde distillation task. Firstly, the author demonstrates the impact of modality gap on cross-modal features theoretically and empirically. To combat with this issue, a Cross-modal Information Bottleneck Approximation (CIBA) framework is proposed that extracts modality-general features through a bottleneck structure, meanwhile aligning teacher and student features with an additional loss. Experiments on several datasets demonstrates the performance of CIBA.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written, which is quite easy to follow.\\n2. The author sufficiently demonstrates that modality gap can cause dimensional collapse, leading to suboptimal performance.\", \"weaknesses\": \"1. My main concern is about the generalizability of the method. As mentioned in problem statements and limitations, the theorem is established on linear extractor, which is inconsistent with practical applications where non-linear encoders are widely applied. Under such conditions, the proposed concept of modality-general and modality-specific parts could be vague since the better capacity of the encoders. I can truly understand the difficulty of theorical provement with non-linear extractors, while the direct application of CIBA seems to be accessible. Can you provide further results of CIBA compared to SOTAs with more powerful encoders on current datasets (RAVDESS and MM-IMDB) to prove the superiority?\\n2. In the method part, a bottleneck structure is utilized to capture mutual modality information. From my point of view, the dimension of the bottleneck feature may be a crucial parameter affecting the granularity of the extracted information. Performance seems to be fluctuant with chaning values of the param according to Fig.5(c). Can you provide more ablation on this param on more datasets? How do you choose the best bottleneck dimension?\\n3. The author mainly focus on the introduction and demonstration of modality gap's impact on dimensional collapse, while the introduction of method seems to be ordinary and unremarkable. Besides, since the information bottleneck structure was proposed by earlier research, and the proposed loss is a direct combination of generation loss and KL loss, the novelty of the paper is somehow limited.\", \"questions\": \"Please refer to the cons part. I will moderately raise my score if the authors can provide further experimental results and answer my questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"## **Ablation on Hyperparameter $H$**\\n\\nAs the reviewer pointed out, the bottleneck dimension parameter $H$ plays a crucial role in determining the quality of cross-modal feature distillation.\\nWe also elaborated that the optimal value of $H$ is proportional to the amount of modality-general information present in the teacher\\u2019s features (L427-431) and is also influenced by the method used to transfer this information (L410-413).\\nIn addition to the ablation results from RAVDESS presented in Fig.5(c), we also have showcased the impact of $H$ in LiDAR-Camera cross-modal feature distillation in Tab.2 by alternating the parameter $H$, where $H=4$ shows the best distillation performance.\\nTo further address the reviewer's concern, we provide the ablation of $H$ on the MM-IMDB and the additional VGGSound dataset in the Appendix E.7 of the revised document.\\nNotably, except for extreme values of $H$, the proposed method consistently outperforms the MSE method, as shown in Fig.11 and 12 in the revised manuscript.\\n\\n \\n\\n## **Computationally Efficient Surrogate for Estimating $H$**\\n\\nInstead of performing a grid search with training student models, we may use the bottleneck models as an efficient surrogate to estimate the optimal $H$ with relatively low computational cost, as described in Section 5.1.3.\\nAs an example, we first train bottleneck models with various $H$ values. Since the bottleneck model is significantly lighter than both the student and teacher models, it requires substantially fewer computational resources for training. We then analyze the singular value spectrum of the trained bottleneck features, as illustrated in Fig.5(b). Subsequently, we may select the minimum $H$ value at which the spectrum saturates. As demonstrated in Fig.5(c), this value consistently yields near best performance.\\n\\nIn Appendix E.7, we extended the spectrum analyses presented in Fig.5 to the VGG-Sound dataset and observed similar results. Specifically, the spectrum (first row of Fig.12) converges around $H=16$. $H=16$ consistently achieves results close to the best (second row of Fig.12). Moreover, except for extreme values of $H$, the proposed method consistently outperforms the MSE method.\"}",
"{\"comment\": \"We sincerely appreciate your positive evaluation of our work. In particular, we are grateful for your recognition of **the significance of our contributions to CMKD literature and the accompanying modality gap analysis**, as well as **the novelty of the proposed CIBA framework**. Additionally, we thank you for highlighting our **theoretical contributions**, **extensive experiments**, and **writing quality**.\\n\\n \\n\\n## **Additional Experimental Validation on VGG-Sound dataset**\\n\\nTo address the reviewer\\u2019s concerns, we conducted **additional experiments on VGG-Sound dataset**. For detailed experimental results and analyses, please refer to Section 5.4 and Appendix E.2 of the revised document. For your convenience, we provide a summary of the key findings in the table below. \\nHere we observe that **our proposed method achieves significant performance improvements compared to the MSE baseline in both video-to-audio and audio-to-video scenarios**. Furthermore, we conducted experiments using various backbones and consistently observed performance gains.\\n\\n>**VGG-Sound**\\n| Method | Modality | Val. | Test |\\n|---|:---:|:---:|:---:|\\n| Video (ResNet50 backbone) | V50 | 50.42 | 49.43 |\\n| Audio (ResNet50 backbone) | A50 | 69.55 | 68.76 |\\n| Video (ResNet18 backbone) | V18 | 42.11 | 41.53 |\\n| Audio (ResNet18 backbone) | A18 | 68.86 | 69.08 |\\n|||||\\n| MSE | V18 $\\\\rightarrow$ A18 | 67.54 | 68.32 |\\n| MSE + CIBA | V18 $\\\\rightarrow$ A18 | 70.11 | 70.39 |\\n|||||\\n| MSE | V50 $\\\\rightarrow$ A18 | 68.53 | 68.54 |\\n| MSE + CIBA | V50 $\\\\rightarrow$ A18 | 70.21 | 70.71 |\\n|||||\\n| MSE | A18 $\\\\rightarrow$ V18 | 42.61 | 41.28 |\\n| MSE + CIBA | A18 $\\\\rightarrow$ V18 | 43.59 | 42.55 |\\n|||||\\n| MSE | A50 $\\\\rightarrow$ V18 | 41.40 | 40.33 |\\n| MSE + CIBA | A50 $\\\\rightarrow$ V18 | 43.44 | 42.95 |\\n\\n\\n \\n\\n## **Comparison to Task-oriented Feature Distillation**\\n\\nTask-oriented feature distillation (TOFD) is a methodology that extracts \\\"task-oriented information\\\" from the teacher's features by simultaneously training an auxiliary classifier during the distillation process. Following the reviewer\\u2019s suggestion, we directly **applied TOFD to the VGG-Sound dataset**, and the experimental results are presented in the table below.\\n\\n>**VGG-Sound**\\n| Method | Modality | Val. | Test |\\n|---|:---:|:---:|:---:|\\n| Video (ResNet18 backbone) | V18 | 42.11 | 41.53 |\\n| Audio (ResNet18 backbone) | A18 | 68.86 | 69.08 |\\n|||||\\n| MSE | V18 $\\\\rightarrow$ A18 | 67.54 | 68.32 |\\n| Task-Oriented | V18 $\\\\rightarrow$ A18 | 67.23 | 67.73 |\\n| MSE + CIBA | V18 $\\\\rightarrow$ A18 | 70.11 | 70.39 |\\n\\nTo the best of our understanding, **TOFD does not explicitly address the separation of modality-general and modality-specific features**, which is crucial for improving the quality of cross-modal distillation. As a result, TOFD, like MSE, exhibits suboptimal performance due to the influence of modality gaps.\\n\\n\\n \\n\\n## **Integrating Uni-Modal Feature Distillation Methods into Our Framework**\\n\\nIn uni-modal feature distillation settings, there is **no need to consider modality-general and modality-specific features, as the modality of inputs for the teacher and student models are identical**. 
As a result, global feature distillation methods--which force the student model's features to mimic the whole features of the teacher--are typically employed and have demonstrated strong performance.\\nAlthough MSE and cross-entropy (CE) losses have contributed to improving the quality of uni-modal knowledge distillation, these approaches are less effective in cross-modal settings due to the modality gap, which hinders effective cross-modal distillation, as demonstrated in Tab.1, 2, and 3.\\nIn the above comment, we also have shown that task-oriented uni-modal distillation approach lead to suboptimal cross-modal distillation results.\\nInstead, our CIBA framework achieves improved quality of cross-modal distillation.\"}",
"{\"summary\": \"This paper investigates the problem of dimensional collapse in cross-modal feature distillation (CMFD), where a student model trained on one modality aims to mimic the feature representations of a teacher model trained on a different modality. The authors hypothesize that the distributional shift, or \\\"modality gap\\\", between the teacher and student modalities leads to the student's feature space collapsing to only capture the modality-general features, resulting in suboptimal distillation performance. To address this issue, the authors provide in-depth analysis on how distributional shifts across different modalities and propose a Cross-modal Information Bottleneck Approximation (CIBA) scheme that extracts and transfers the modality-general features from the teacher to the student, allowing the student to effectively span both modality-general and modality-specific feature spaces.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Research about cross-modal knowledge distillation (CMKD) on the feature view is an important topic for multimodal learning and knowledge distillation. This paper analyses the dimensional collapse induced by modality gap and propose Cross-modal Information Bottleneck Approximation (CIBA) to disentangle the general and specific knowledge, which is novel and practical.\\n2. Utilizing the Mean Squared Error (MSE) loss for feature distillation (FD) is reasonable and suitable for the subsequent theoretical analysis.\\n3. This work is a good extension of the modality focusing hypothesis, and gives a solid analysis and detailed solutions.\\n4. This work is well written and organized. Extensive experiments on Audio-Image, Image-Text, and LiDAR-Camera crossmodal transfer are conducted.\", \"weaknesses\": \"1. Modality gap is widely studies in multimodal learning, and this paper does not give a review of previous modality gap analysis. Moreover, the cross-modal knowledge distillation on logit-level method [r1] is not mentioned and analysed.\\n[r1] Huo F, Xu W, Guo J, et al. C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 16006-16015.\\n2. The RAVDESS, MM-IMDB, and nuScenes have limited class categories. Large-scale experiments like conducting experiments on VGG-Sound (or subset) will make the paper more convincing.\\n3. Related works about the 'Cross-modal knowledge distillation' are somewhat out-of-date, only one paper published in 2023 is mentioned.\\n4. The proposed method is somewhat similar to online distillation [r1] and task-oriented feature distillation [r2]. How about the performance of directly employing task-oriented feature distillation [r2] on cross-modal feature distillation?\\n[r2]Zhang L, Shi Y, Shi Z, et al. Task-oriented feature distillation[J]. Advances in Neural Information Processing Systems, 2020, 33: 14759-14771.\", \"questions\": \"1. How is the Figure 1 formulated? The manuscript does not mention the details. I think it is important for the motivation of modality-general and modality-specific knowledge analysis.\\n2. How about directly apply unimodal knowledge distillation on crossmodal knowledge distillation? Could the proposed method be integrated into SOTA methds?\\n\\n[r1] Zhang L, Shi Y, Shi Z, et al. Task-oriented feature distillation[J]. Advances in Neural Information Processing Systems, 2020, 33: 14759-14771. \\n\\n[r2] Huang T, You S, Wang F, et al. 
Knowledge distillation from a stronger teacher[J]. Advances in Neural Information Processing Systems, 2022, 35: 33716-33727.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Reviewer AFwU\", \"comment\": \"Thank you for your comprehensive answers and the additional experiments. I will retain the original score.\"}",
"{\"comment\": \"We would like to express our sincere gratitude to all reviewers for their thoughtful and constructive comments on our manuscript. In this rebuttal, we have carefully addressed each comment raised by the reviewers. For ease of reference, we have organized our responses to individual comments under their respective points.\\n\\nAdditionally, in the revised version of the manuscript, all changes have been highlighted in **blue** to clearly indicate the modifications made in response to the reviewers\\u2019 suggestions. Thank you again for your time and effort in reviewing our work.\"}",
"{\"comment\": \"We sincerely appreciate your positive evaluation of our work. In particular, we are grateful for your acknowledgment of the core contributions of our paper: **investigation of the influence of the modality gap on CMFD in relation to the dimensional collapse phenomenon**, as well as **proposing CIBA framework** and its **experimental validation** aimed at mitigating the impact of the modality gap.\\n\\nWe would also be deeply grateful if you could *consider providing an improved evaluation*, should our revisions have sufficiently addressed your concerns.\\n\\n \\n\\n## **Additional Experimental Validation on VGG-Sound dataset**\\n\\nTo address the reviewer\\u2019s concerns, we conducted **additional experiments on VGG-Sound dataset**. For detailed experimental results and analyses, please refer to Section 5.4 and Appendix E.2 of the revised document. For your convenience, we provide a summary of the key findings in the table below. \\nHere we observe that **our proposed method achieves significant performance improvements compared to the MSE baseline in both video-to-audio and audio-to-video scenarios**. Furthermore, we conducted experiments using various backbones and consistently observed performance gains. \\n\\n>**VGG-Sound**\\n| Method | Modality | Val. | Test |\\n|---|:---:|:---:|:---:|\\n| Video (ResNet50 backbone) | V50 | 50.42 | 49.43 |\\n| Audio (ResNet50 backbone) | A50 | 69.55 | 68.76 |\\n| Video (ResNet18 backbone) | V18 | 42.11 | 41.53 |\\n| Audio (ResNet18 backbone) | A18 | 68.86 | 69.08 |\\n|||||\\n| MSE | V18 $\\\\rightarrow$ A18 | 67.54 | 68.32 |\\n| MSE + CIBA | V18 $\\\\rightarrow$ A18 | 70.11 | 70.39 |\\n|||||\\n| MSE | V50 $\\\\rightarrow$ A18 | 68.53 | 68.54 |\\n| MSE + CIBA | V50 $\\\\rightarrow$ A18 | 70.21 | 70.71 |\\n|||||\\n| MSE | A18 $\\\\rightarrow$ V18 | 42.61 | 41.28 |\\n| MSE + CIBA | A18 $\\\\rightarrow$ V18 | 43.59 | 42.55 |\\n|||||\\n| MSE | A50 $\\\\rightarrow$ V18 | 41.40 | 40.33 |\\n| MSE + CIBA | A50 $\\\\rightarrow$ V18 | 43.44 | 42.95 |\"}",
"{\"comment\": \"We sincerely appreciate your recognition of the contributions of our work and your decision to maintain the acceptance score. Thank you again for your careful consideration and the time and effort you have dedicated to this review.\"}"
]
} |
19QWQSsbOA | Multi-scale Conditional Generative Modeling for Microscopic Image Restoration | [
"Luzhe Huang",
"Xiongye Xiao",
"Shixuan Li",
"Yi Huang",
"Aydogan Ozcan",
"Paul Bogdan"
] | The advance of diffusion-based generative models in recent years has revolutionized state-of-the-art (SOTA) techniques in a wide variety of image analysis and synthesis tasks, whereas their adaptation on image restoration, particularly within computational microscopy remains theoretically and empirically underexplored. In this research, we introduce a multi-scale generative model that enhances conditional image restoration through a novel exploitation of the Brownian Bridge process within wavelet domain. By initiating the Brownian Bridge diffusion process specifically at the lowest-frequency subband and applying generative adversarial networks at subsequent multi-scale high-frequency subbands in the wavelet domain, our method provides significant acceleration during training and sampling while sustaining a high image generation quality and diversity on par with SOTA diffusion models. Experimental results on various computational microscopy and imaging tasks confirm our method's robust performance and its considerable reduction in its sampling steps and time. This pioneering technique offers an efficient image restoration framework that harmonizes efficiency with quality, signifying a major stride in incorporating cutting-edge generative models into computational microscopy workflows. | [
"Microscopic Image Restoration",
"Generative Model"
] | Reject | https://openreview.net/pdf?id=19QWQSsbOA | https://openreview.net/forum?id=19QWQSsbOA | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rFI57UbR33",
"pm3Pel1lRA",
"jkTsZCtPo1",
"jLSHY8Wjrg",
"ggavYra32c",
"anFzu5PxAn",
"X5KZfjStz8",
"IiFvErSQhz",
"D05oPW0eON",
"Av0jAQHZUZ",
"AZA25KuW6C",
"7ZdQYaX9GB",
"6MzQGMZVl6",
"5WqpUq0U6j"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732863006058,
1730507474864,
1730687152097,
1732778435930,
1730469829157,
1730091746271,
1732778403819,
1732782220665,
1737524120924,
1732859279587,
1734879938342,
1732778068357,
1732781748768,
1732781704347
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11382/Reviewer_Ytoo"
],
[
"ICLR.cc/2025/Conference/Submission11382/Reviewer_a2Tp"
],
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11382/Reviewer_yXmP"
],
[
"ICLR.cc/2025/Conference/Submission11382/Reviewer_CZbw"
],
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11382/Reviewer_CZbw"
],
[
"ICLR.cc/2025/Conference/Submission11382/Area_Chair_HFR9"
],
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11382/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for your timely response!\", \"comment\": \"Thank you very much for your timely response. We are really glad that our answer has addressed your concerns.\\n\\nWe realize that we mistakenly included results from different experiments in our comparison. We have now corrected this error and updated the results accordingly.\\n\\nOnce again, thank you for your correction and for your valuable effort. Your insights are greatly appreciated!\"}",
"{\"summary\": \"The authors propose a multi-scale conditional generative model (MSCGM) for image restoration, incorporating multi-scale wavelet transforms and a Brownian bridge stochastic process. The wavelet transform is included due to its reversibility, which maintains information integrity in the latent diffusion space, in contrast to traditional Latent Diffusion Models (LDM). The Brownian bridge stochastic process is leveraged to introduce conditional images in both forward and reverse processes. While the authors aim to address microscopic image restoration, the motivation and results in the paper do not consistently support this focus.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors recognize the loss of detail in LDM, a known issue, and apply it to the microscopic image restoration context, an interesting direction.\\n2. They introduce the novel idea that the Brownian bridge stochastic process could effectively integrate conditional images.\", \"weaknesses\": \"1. **Lack of Consistency:** The paper lacks organization and clarity. Although the title emphasizes \\\"Microscopic Image Restoration,\\\" the experiments primarily focus on \\\"Natural Image Super-resolution\\\" and \\\"Low-light Natural Image Enhancement.\\\" Only a small subset of results explores microscopic images. If the model is intended for general image restoration, it would be more accurate to propose it as a \\u2018unified image restoration\\u2019 model. I suggest the authors either refocus their experiments more heavily on microscopic image restoration to align with the title, or broaden the title to reflect the wider scope of image restoration tasks covered in the paper.\\n \\n2. **Introduction Needs Refinement:** The introduction lacks a clear problem definition and research motivation. The first two paragraphs provide a broad overview of diffusion processes that diverges from the paper\\u2019s focus. The discussion on latent diffusion downsampling is a well-known issue and could be alleviated by higher resolutions. The authors should clearly articulate why microscopic images especially require the multi-scale wavelet transform in the introduction. Please include a discussion of how their approach compares to or builds upon these existing wavelet-based diffusion models in the Introduction, highlighting any key differences or improvements.\\n \\n3. **Lack of Acknowledgment of Prior Work:** The paper does not credit previous studies applying wavelet transforms in diffusion models, which could mislead readers into believing the concept originated here. Papers like \\\"Wavelet Diffusion Models are Fast and Scalable Image Generators (CVPR 2023)\\\" and \\\"Training Generative Image Super-Resolution Models by Wavelet-Domain Losses Enables Better Control of Artifacts (CVPR 2024)\\\" are directly related and should be cited with comparisons to clarify this study\\u2019s contributions.\\n \\n4. **Figure 1 Illustration Issues:** The paper title focuses on \\\"Microscopic Image Restoration,\\\" yet Figure 1 uses natural images. Including examples of microscopic images to show the degradations introduced by LDM and Refusion compared to MSCGM would enhance clarity.\\n \\n5. **Methodology Development Clarity:** The description of the wavelet transform on page 4 is overly general, with key details moved to the appendix. Clear explanations of any novel model designs or algorithmic adaptations should be provided in the main text.\\n \\n6. 
**Quality of Mathematical Presentation:** Symbols in the equations are used without proper declarations or explanations. Inconsistent symbols, like the variable for the normal distribution \\\\( N \\\\), further detract from clarity.\\n \\n7. **Algorithm 1 Lack of Context:** Algorithm 1 on page 5 is underdeveloped. Symbols are not defined before use, and the algorithm lacks defined input-output requirements.\\n \\n8. **Figure 2 Diagram Confusion:** Figure 2 is difficult to interpret. The illustration doesn\\u2019t clearly label network modules, workflow processes, or shared parameters (only a line is shown), which fails to clarify the model structure effectively.\\n \\n9. **Lack of Dataset Information:** The results section includes evaluations of microscopic images, but there\\u2019s no description of the dataset. Is it public or private? What is the image count? Without these details, readers cannot analyze or reproduce the results. Please provide a detailed description of the microscopic image dataset used, including its source, size, and any preprocessing steps applied.\\n \\n10. **Insufficient Ablation Studies:** Results provide only a simple comparison with LDM, without deeper exploration of MSCGM\\u2019s components or ablation studies to justify the performance benefits of each module.\\n \\n11. **Unconvincing Model Performance:** The model\\u2019s performance requires further validation through comparison with advanced models. Numerous diffusion-based image restoration models from 2024 exist, yet none are used for comparison. This weakens the paper\\u2019s credibility. Key diffusion-based image restoration works worth considering include: \\n - RDDM ([link](https://cvpr.thecvf.com/virtual/2024/poster/31373)) \\n - HIR-Diff ([link](https://cvpr.thecvf.com/virtual/2024/poster/29665)) \\n - WF-Diff ([link](https://cvpr.thecvf.com/virtual/2024/poster/30059)) \\n - DeqIR ([link](https://cvpr.thecvf.com/virtual/2024/poster/31759)) \\n - GDP ([link](https://cvpr.thecvf.com/virtual/2023/poster/22095))\", \"questions\": \"Please see my concerns in Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces a multi-scale conditional generative model (MSCGM) aimed at enhancing microscopic image restoration by combining wavelet transforms and a Brownian Bridge diffusion process. The authors leverage multi-scale wavelet transforms to efficiently model low- and high-frequency image components, significantly improving the generation quality and speed of image restoration compared to traditional diffusion models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. MSCGM\\u2019s wavelet-based decomposition and conditional modeling shows substantial improvements in sampling speed and better reconstruction quality.\\n2. By adapting the generative approach to frequency characteristics, MSCGM enhances detail in restored images, especially in high-frequency components crucial for microscopy images.\\n3. The authors presented a new loss function.\", \"weaknesses\": \"1. Equation 18 combines multiple objectives\\u2014L2 loss, Structural Similarity Index Measure (SSIM), and Wasserstein distance\\u2014but the rationale behind each component\\u2019s inclusion is not fully explained. Additionally, the roles and relative importance of the scaling parameters \\u03bb, \\u03bd, and \\u03b1 are unclear.\\n2. The training procedure for MSCGM is not explicitly described. Unlike the clear training steps outlined for BBDP, MSCGM lacks a step-by-step description of its training pipeline. \\n3. While Table 1 compares MSCGM with other models in terms of PSNR, SSIM, and sampling time, it does not include training time or the number of trainable parameters for each method. Without these metrics, it is challenging to gauge MSCGM\\u2019s overall computational cost relative to other approaches. Including such details would provide a more comprehensive view of the model\\u2019s efficiency.\\n4. In Section 4.2, the authors state that FID is considered as an evaluation metric. However, this metric is not included in Table 1. As FID is widely used in assessing generative models for image quality, its inclusion would offer further insights into MSCGM\\u2019s performance in distributional similarity to real images.\\n5. Equations from 4 to 15 are borrowed from BBDP paper. It is better to include them under the Preliminaries section.\", \"questions\": \"1. Could the authors provide more detailed explanations regarding the choice and role of each loss term in Equation 18 and explain how they determined the relative weighting (\\u03bb, \\u03bd, \\u03b1 values) between the terms.\\n2. Could the authors provide a comparison of training time and the number of training parameters for MSCGM versus other models?\\n3. Could the authors to provide a detailed algorithm or pseudocode for MSCGM training, similar to what they provided for BBDP Algorithm.\", \"flag_for_ethics_review\": \"['Yes, Other reasons (please specify below)']\", \"details_of_ethics_concerns\": \"The anonymity of the authors is compromised, as this paper is available on arXiv at https://arxiv.org/abs/2407.05259.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your thoughtful and constructive feedback.\", \"comment\": \"We thank the reviewer for this valuable point.\\n\\nIn microscopy image super-resolution, for example, Figure 3 and 4, the input and target images are actually captured by two different imaging modalities and belong to two distinct image domains. Therefore a structure like BBDP instead of DDPM is essential to establish mappings between the two domains. The super-resolution experiments on natural images are presented to show the versatility of our method and provide comparison against existing baselines.\", \"here_we_provide_a_ablation_study_that_replaces_wt_with_a_simple_down_sampling_method_on_div2k_dataset\": \"| Methods | PSNR (dB) \\u2191 (DIV2K) | PSNR (dB) \\u2191 (Set5) | PSNR (dB) \\u2191 (Set14) | SSIM \\u2191 (DIV2K) | SSIM \\u2191 (Set5) | SSIM \\u2191 (Set14) |\\n|----------|---------------------|---------------------|-------------------|--------------------|----------------|---------------|\\n| MSCGM (simple down-sampling method) | 30.73| 31.28 | 29.67 | 0.65 | 0.78 | 0.65 |\\n| MSCGM | **31.66** | **32.33** | **30.79** | **0.72** | **0.85** | **0.71** |\\n\\nThe inferior performance of MSCGM with simple down-sampling is expected as the high-frequency signals are lost during down-sampling. And it makes GAN much harder to reconstruct a high-quality image with rich details.\\n\\nThank you again for your hard work. If you have any additional questions or concerns, please don\\u2019t hesitate to reach out, we will answer them in time!\"}",
"{\"summary\": \"The authors present a novel multi-scale generative model that leverages the Brownian Bridge process within the wavelet domain. This approach enhances inference speed while maintaining high image quality and diversity. The model is further integrated with computational microscopy workflows, expanding its applicability. The authors evaluate its performance on both microscopy and natural image datasets, demonstrating that it achieves slightly better results compared to existing methods such as IR_SDE, Refusion, and BBDM, with the added advantage of faster inference.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"\\u2022 Integrating the Brownian Bridge Diffusion Process (BBDP) with adversarial learning in a multi-scale wavelet diffusion framework is innovative, enhancing image quality and stability.\\n\\n\\u2022 The model achieves notable speed improvements, delivering faster inference without sacrificing image quality. \\n\\n\\u2022 Performance remains consistent across diverse experiments, demonstrating robustness on both microscopy and natural images.\", \"weaknesses\": \"\\u2022 The paper lacks a clear motivation for applying this model to computational microscopy workflows. The rationale for this specific application is unclear and lacks context, the relevance to microscopy appears out of place. A discussion on how this functionality benefits microscopy would help justify this direction and clarify its practical utility.\\n\\n\\u2022 The primary advantage of this method is its reduced inference time; however, the paper lacks a direct comparison with other methods that similarly aim to improve efficiency. Including such a comparison would provide valuable context and help quantify the benefits more clearly.\\n\\n\\u2022 The general evaluation lacks depth and is missing ablation studies. \\n\\n\\u2022 There appear to be configuration issues with the comparison methods. For instance, IR-SDE [1] is cited as requiring 100 steps, but the authors use 1000, which significantly prolongs inference time. With the correct configuration (100 steps), the inference time should drop from 32 seconds to approximately 3 seconds.\\n\\n\\u2022 The choice of metrics is limited and somewhat inadequate for a super-resolution task. Relying solely on PSNR and SSIM may overlook important aspects of image quality. Including pixel-based metrics would provide a more comprehensive evaluation and might show shortcomings of the proposed method.\\n\\n\\n[1] Luo, Ziwei, et al. \\\"Image restoration with mean-reverting stochastic differential equations.\\\" arXiv preprint arXiv:2301.11699 (2023).\", \"questions\": \"Especially considering that inference time is one of the main benefits, why was it not compared to models with fewer step counts or at least an in-depth analysis of how step counts influence the SOTA model performance? E.g. [2], or other methods that can be applied to the problem domain?\\n\\n[2] Phung, Hao, Quan Dao, and Anh Tran. \\\"Wavelet diffusion models are fast and scalable image generators.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposed a multi-scaled generative model that uses a diffusion model (DM) for low-frequency image and a GAN for high frequency images. The wavelet transform provides multi-scale images without lossy encoding process. The lossless compression is particularly important for microscopic imaging where high-frequency component are sparse and non-Gaussian. Additionally, the authors showed the near-Gaussian property of low-frequency component and thus employed Brownian Bridge Diffusion Process (BBDP). The idea of employing different networks (DM and GAN) to different resolutions according to the characteristics of microscopic dataset is novel. The proposed MSCGM (multi-scale conditional generative model) showed improved super-resolution result with fast inference time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper analyzed the characteristics of microscopic images and proposed adequate methodology to address the sparsity and non-Gaussianity. Since the wavelet transformation divides the image into two subbands (high- and low- frequency coefficients) losslessly, handling each subband in a different manner is original.\\nThe contribution of the work is clear and well demonstrated. \\nIn addition, the work could be further applied to different modality images where sparse or non-Gaussianity exist.\", \"weaknesses\": \"Although the idea of the paper is novel, the effectiveness of the work has not been thoroughly assessed. The use of WT and the superiority of the proposed method compared to conventional method should be further evaluated. The specific comments are described in Questions.\", \"questions\": \"The paper demonstrated that the low-frequency coefficients in higher scales show Gaussian tendency and thus applied this to BBDP. The idea is novel and well hypothesized, but it would be helpful if other DM methods, such as IR-SDE and ReFusion methods that are implemented on 4x super-resolution experiment, are also tested on microscopy image dataset. Only CMSR (GAN: non-diffusion model), is compared at the moment, not showing the effectiveness of proposed near-Gaussianity assumption.\\nSimilarly, applying BBDM to full resolution image does not seem to be fair comparison. Since many works demonstrated the effectiveness of multi-scale diffusion models, BBDM should be implemented in a same manner as the proposed method to prove the superiority of WT instead of other compression technique. Please conduct an ablation study that replaces WT with simple down-sampling.\\nIs there any specific reason why the proposed work adopted BBDM which was initially designed for image translation where input and target domains are different? Super-resolution tasks seem to have similar domains for input and target. Justify the choice of BBDM for super-resolution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your thoughtful and constructive feedback.\", \"comment\": \"We sincerely thank you for your careful review and insightful comments. We have provided a point-to-point response to each of your concerns below:\\n\\n>**Motivation:** The paper lacks a clear motivation for applying this model to computational microscopy workflows. The rationale for this specific application is unclear and lacks context, the relevance to microscopy appears out of place. A discussion on how this functionality benefits microscopy would help justify this direction and clarify its practical utility.\\n\\n**A:** Generative models such as GANs have been successfully and widely used in microscopy imaging applications. It is important to integrate the existing state-of-art diffusion models and GANs on these datasets and advance AI-enabled microscopy imaging.\\n\\n>**Direct comparison:** The primary advantage of this method is its reduced inference time; however, the paper lacks a direct comparison with other methods that similarly aim to improve efficiency. Including such a comparison would provide valuable context and help quantify the benefits more clearly.\\n\\n**A:** Our work proposes an innovative multi-scale conditional generative model for conditional image generation and translation tasks, and validates it on multiple datasets, including natural images and microscopic images. We mainly demonstrate a fundamental implementation, which is similar to DDPM, and existing acceleration techniques, including DDIM and pseudo numerical methods can be directly applied to the sampling process to improve its efficiency.\\n\\n>**Ablation studies:** The general evaluation lacks depth and is missing ablation studies.\\n\\n**A:** Thank you for your suggestion. We have added the ablation study results of the experiment mentioned in this article. We remove the wavelet transform in the model and only conduct the experiment in the time domain.\\n\\n| Methods | PSNR (dB) \\u2191 (DIV2K) | PSNR (dB) \\u2191 (Set5) | PSNR (dB) \\u2191 (Set14) | SSIM \\u2191 (DIV2K) | SSIM \\u2191 (Set5) | SSIM \\u2191 (Set14) |\\n|----------|---------------------|---------------------|-------------------|--------------------|----------------|---------------|\\n| MSCGM (without wavelet) | 30.13 | 31.01 | 28.99 | 0.68 | 0.76 | 0.63|\\n| MSCGM | **31.66** | **32.33** | **30.79** | **0.72** | **0.85** | **0.71** |\\n\\n\\n\\n>**Configure issue:** There appear to be configuration issues with the comparison methods. For instance, IR-SDE [1] is cited as requiring 100 steps, but the authors use 1000, which significantly prolongs inference time. With the correct configuration (100 steps), the inference time should drop from 32 seconds to approximately 3 seconds.\\n\\n**A:** Sample steps of 1000 were employed for fair comparison. As discussed above, the same acceleration strategy, including a smaller total sample steps can be equivalently applied during the training and/or sampling of our method to reach the same reduction of time.\\n\\n>**Metrics:** The choice of metrics is limited and somewhat inadequate for a super-resolution task. Relying solely on PSNR and SSIM may overlook important aspects of image quality. Including pixel-based metrics would provide a more comprehensive evaluation and might show shortcomings of the proposed method.\\n\\n**A:** The paper referred here by Luo et al [1]. applies the same metrics PSNR and SSIM for the super-resolution task, as reported in Table 1 in our paper. 
We added the information about the number of model parameters in the revision as further supplement.\\n\\n>**Compare with different steps:** Especially considering that inference time is one of the main benefits, why was it not compared to models with fewer step counts or at least an in-depth analysis of how step counts influence the SOTA model performance? E.g. [2], or other methods that can be applied to the problem domain?\\n\\n**A:** Thank you for the question, Figure 12 and 14 in appendix G.4 shows the sampling results of our method (MSCGM) and BBDM with various sampling steps from 4 to 1000. This intuitively explains the performance of the model under different sample steps. Since the underlying logic of our implementation is different from that of wavelet diffusion [2], we cannot simply apply a different number of steps for comparison, which is relatively unfair.\\n\\nThank you again for your valuable feedback and suggestions. If you have any additional questions or concerns, please don\\u2019t hesitate to reach out. If our responses have addressed most of your concerns, we kindly ask if it might improve your evaluation of our manuscript. Your support is greatly appreciated!\\n\\n\\n*[1] Luo, Ziwei, et al. \\\"Image restoration with mean-reverting stochastic differential equations.\\\" arXiv preprint arXiv:2301.11699 (2023).*\\n\\n*[2] Phung, Hao, Quan Dao, and Anh Tran. \\\"Wavelet diffusion models are fast and scalable image generators.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.*\"}",
"{\"title\": \"Cont.\", \"comment\": \">**Lack of Dataset Information:** The results section includes evaluations of microscopic images, but there\\u2019s no description of the dataset. Is it public or private? What is the image count? Without these details, readers cannot analyze or reproduce the results. Please provide a detailed description of the microscopic image dataset used, including its source, size, and any preprocessing steps applied.\\n\\n**A:** Thank you for your concern. However, a detailed description of our dataset and parts of the dataset are already provided in Appendix I. You can check the link we provided.\\n\\n>**Insufficient Ablation Studies:** Results provide only a simple comparison with LDM, without deeper exploration of MSCGM\\u2019s components or ablation studies to justify the performance benefits of each module.\\n\\n**A:** Thank you for your suggestion. We have added the ablation study results of the experiment mentioned in this article. We remove the wavelet transform in the model and only conduct the experiment in the time domain and use the different down-sampling method. Results are provided as below:\\n\\n| Methods | PSNR (dB) \\u2191 (DIV2K) | PSNR (dB) \\u2191 (Set5) | PSNR (dB) \\u2191 (Set14) | SSIM \\u2191 (DIV2K) | SSIM \\u2191 (Set5) | SSIM \\u2191 (Set14) |\\n|----------|---------------------|---------------------|-------------------|--------------------|----------------|---------------|\\n| MSCGM (without wavelet) | 30.13 | 31.01 | 28.99 | 0.68 | 0.76| 0.63|\\n| MSCGM (simple down-sampling method) | 30.73| 31.28 | 29.67 | 0.70 | 0.78 | 0.65 |\\n| MSCGM | **31.66** | **32.33** | **30.79** | **0.72** | **0.85** | **0.71** |\\n\\n>**Unconvincing Model Performance:** The model\\u2019s performance requires further validation through comparison with advanced models. Numerous diffusion-based image restoration models from 2024 exist, yet none are used for comparison. This weakens the paper\\u2019s credibility. Key diffusion-based image restoration works worth considering include:\\n\\n**A:** Due to the limitation of running resources, we may not be able to verify all the proposed models one by one, but our experiments show that we perform best on this special type of microscopic images with high details and sparseness. Our experiments mainly provide support and verification for the theory of multiscale conditional generative modeling, which is a basic model that can be applied to more advanced model structures. In addition, for more experiments details, please refer to Appendix G.3 and the result table in the article. At the same time, the outstanding advantage of our model is fast sampling. In this regard, our model performs much better than other models.\\n\\nWe hope our answers can answer your doubts and concerns. If you have further questions, please do not hesitate to ask us. We will answer all your questions in a timely manner. Thanks again for your hard work!\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you for addressing the questions I raised.\\nI understand the use of BBDP instead of DDPM because a different imaging technique was used for the input and target images.\\nAs the author mentioned, simple down-sampling drops quite a lot of information. Therefore, pixel-shuffling, as applied in \\u201cMSSNet: Multi-Scale-Stage Network for Single Image Deblurring,\\u201d should be used for a fair comparison.\\nThe table shows that the MSCGM is better with simple down-sampling against WT. Is it correct?\"}",
"{\"metareview\": \"This work introduces a multi-scale generative model to enhance conditional image restoration by initiating the Brownian Bridge diffusion process specifically at the lowest-frequency subband and applying generative adversarial networks at subsequent multi-scale high-frequency subbands in the wavelet domain.\", \"additional_comments_on_reviewer_discussion\": \"This work has four reviewers. Two reviewers are positive to accept it, while the other two reviewers are negative to accept it. And the final ratings after rebuttal are 6, 6, 3, and 5. After checking the comments and the author's responses, I find that this work has many weaknesses about unclear motivations, clarification issues, unconvinced experiments without comparisons against SOTA diffusion-based methods, missed ablation studies, and so on. In this regard, this work can not be accepted in ICLR 2015.\"}",
"{\"title\": \"Thank you for your through review and encouraging feedback.\", \"comment\": \"We would like to thank you for acknowledging the novelty and the significance of our work. We now take the opportunity to clarify the raising concerns:\\n>**Training loss explanation:** Could the authors provide more detailed explanations regarding the choice and role of each loss term in Equation 18 and explain how they determined the relative weighting (\\u03bb, \\u03bd, \\u03b1 values) between the terms.\\n\\n\\n**A:** These loss terms have been widely used in previous works, such as:\\n\\n\\n1. [Zhang H, et al. High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network. Biomed Opt Express. 2019 Feb 4;10(3):1044-1063. doi: 10.1364/BOE.10.001044.]\\n\\n\\n2. [Yilin Luo, et al.Single-Shot Autofocusing of Microscopy Images Using Deep Learning ACS Photonics 2021 8 (2), 625-638 DOI: 10.1021/acsphotonics.0c01774]\\n\\n\\n3. [Xu, Hao, et al. \\\"Microscopic image augmentation using an enhanced WGAN.\\\" The fourth international symposium on image computing and digital medicine. 2020.]\\n\\n\\nAnd parameters vary for each experiment and are generally determined empirically. To better clarify the functions of each loss term, we have revised the Methods section and add a sentence above the loss definition, cited below:\\n\\n\\n*\\u201cWe adopted a pixel-wise L2 loss and a structural similarity index loss to penalize local and global mismatch, respectively.\\u201d*\\n\\n\\n>**Training procedure description:** The training procedure for MSCGM is not explicitly described. Unlike the clear training steps outlined for BBDP, MSCGM lacks a step-by-step description of its training pipeline.\\n\\n\\n**A:** We thank the reviewer for this valuable suggestion. We have added the detailed description of the training procedure of MSCGM in the appendix.\\n\\n\\n>**Adding more metrics:** Could the authors provide a comparison of training time and the number of training parameters for MSCGM versus other models?\\n\\n\\n**A:** We utilized publicly available implementations of IR-SDE and ReFusion. The training time was not provided. But we have revised Table 1 to include the comparison on trainable parameters for the four models.\\n\\n\\n\\n\\n>**FID:** In Section 4.2, the authors state that FID is considered as an evaluation metric. However, this metric is not included in Table 1. As FID is widely used in assessing generative models for image quality, its inclusion would offer further insights into MSCGM\\u2019s performance in distributional similarity to real images.\\n\\n\\n**A:** For image restoration tasks such as image super-resolution tasks, PSNR and SSIM are more commonly used metrics to evaluate the similarity between outputs and ground truth images. In contrast, FID focuses on measuring the distribution similarity between output and target images. We also report FID metrics for some tasks in the appendix.\\n\\n\\n>**Reintegrate content:** Equations from 4 to 15 are borrowed from BBDP paper. It is better to include them under the Preliminaries section.\\n\\n\\n**A:** We really thank the suggestions from the reviewer. However, we found merging section 3.1 and 3.2 makes the preliminaries too prolonged. 
In order to explain the content more clearly, we decided to separate them.\\n\\n\\n>**Detailed algorithm:** Could the authors to provide a detailed algorithm or pseudocode for MSCGM training, similar to what they provided for BBDP Algorithm.\\n\\n\\n**A:** Thank you for the heads up, we have included the pseudocode in the appendix.\\n\\n\\n\\n\\nWe hope our answers can answer your doubts and concerns. If you have further questions, please do not hesitate to ask us. We will answer all your questions in a timely manner. Thanks again for your hard work!\"}",
"{\"title\": \"Cont.\", \"comment\": \">**Figure 1 Illustration Issues:** The paper title focuses on \\\"Microscopic Image Restoration,\\\" yet Figure 1 uses natural images. Including examples of microscopic images to show the degradations introduced by LDM and Refusion compared to MSCGM would enhance clarity.\\n\\n**A:** Thank you for your insightful comment regarding Figure 1. We chose to use natural images in this figure because they effectively demonstrate the significant differences between wavelet transform and the other two methods. While using microscopic images is feasible, they do not clearly illustrate the lossless property of the wavelet transform. Moreover, our intention with Figure 1 was just to provide a simple illustration of the pattern. (LDM and Refusion were not designed for microscopy data and no public implementation was available) We appreciate your suggestion and will consider it in future presentations to enhance clarity.\\n\\n>**Methodology Development Clarity:** The description of the wavelet transform on page 4 is overly general, with key details moved to the appendix. Clear explanations of any novel model designs or algorithmic adaptations should be provided in the main text.\\n\\n**A:** Thank you for your suggestion. We will move the novel model designs or algorithmic adaptations in the appendix to the main text.\\n\\n>**Quality of Mathematical Presentation:** Symbols in the equations are used without proper declarations or explanations. Inconsistent symbols, like the variable for the normal distribution ( N ), further detract from clarity.\\n\\n**A:** Thank you for pointing out this problem. We have checked the consistency of the equations in the revision and corrected the inconsistencies.\\n\\n>**Algorithm 1 Lack of Context:** Algorithm 1 on page 5 is underdeveloped. Symbols are not defined before use, and the algorithm lacks defined input-output requirements.\\n\\n**A:** We added more explanatory text.\\n\\n>**Figure 2 Diagram Confusion:** Figure 2 is difficult to interpret. The illustration doesn\\u2019t clearly label network modules, workflow processes, or shared parameters (only a line is shown), which fails to clarify the model structure effectively.\\n\\n**A:** Modify Fig.2 to add a GAN module. It is common practice to omit the tedious model structures but elucidate them in the Methods section.\"}",
"{\"title\": \"Thank you for your through review and encouraging feedback.\", \"comment\": \"We thank you for your thorough review and constructive comments on our work. In response to your questions/concerns, we have provided detailed answers below.\\n\\n>**Lack of Consistency:** The paper lacks organization and clarity. Although the title emphasizes \\\"Microscopic Image Restoration,\\\" the experiments primarily focus on \\\"Natural Image Super-resolution\\\" and \\\"Low-light Natural Image Enhancement.\\\" Only a small subset of results explores microscopic images. If the model is intended for general image restoration, it would be more accurate to propose it as a \\u2018unified image restoration\\u2019 model. I suggest the authors either refocus their experiments more heavily on microscopic image restoration to align with the title, or broaden the title to reflect the wider scope of image restoration tasks covered in the paper.\\n\\n**A:** Our theory and model are mainly aimed at microscopic images with more details but sparse image contents. The applications on natural image datasets are shown for two reasons: (1) on these tasks we have pre-trained baselines to establish solid a comparison between our method and previous state-of-the-art methods, (2) to demonstrate the adaptability of our method on non-sparse, general image restoration tasks such as natural image super-resolution. \\n\\nBesides, the two microscopic image datasets involved in this work are representative as they (1) are captured by a typical super-resolution optical microscopy (stimulated emission depletion microscopy) for the HR images and a diffraction-limited microscopy (confocal) for the LR images, (2) contain typical sparse samples with distinct features, including simple samples like nano-beads and relatively complex samples like HeLa cells. The main focus of this article is still on the super-resolution task of microscopic images. \\n\\n>**Introduction Needs Refinement:** The introduction lacks a clear problem definition and research motivation. The first two paragraphs provide a broad overview of diffusion processes that diverges from the paper\\u2019s focus. The discussion on latent diffusion downsampling is a well-known issue and could be alleviated by higher resolutions. The authors should clearly articulate why microscopic images especially require the multi-scale wavelet transform in the introduction. Please include a discussion of how their approach compares to or builds upon these existing wavelet-based diffusion models in the Introduction, highlighting any key differences or improvements.\\n\\n**A:** Our introduction strictly focuses on the microscopic image restoration tasks, discusses and compares existing methods, including GAN and diffusion models, the motivations of this work, especially why we design MSCGM specifically for microscopy data, is elucidated in the second to last paragraph of the Introduction section, cited below:\\n\\u201cOn the other hand, \\u2026\\u201d\\nThe advantages of our method over existing ones are summarized in the last paragraph of the Introduction section, cited below:\\n\\u201c\\u201d\\nTo the best of our knowledge, related methods like wavelet-based DMs were not designed for or demonstrated on microscopy data, therefore we believe such a comparison or claim in the Introduction is improper. 
Detailed discussion on related works such as wavelet-based DM are elucidated in the \\u201cRelated Works\\u201d section.\\n\\n\\n>**Lack of Acknowledgement of Prior Work:** The paper does not credit previous studies applying wavelet transforms in diffusion models, which could mislead readers into believing the concept originated here. Papers like \\\"Wavelet Diffusion Models are Fast and Scalable Image Generators (CVPR 2023)\\\" and \\\"Training Generative Image Super-Resolution Models by Wavelet-Domain Losses Enables Better Control of Artifacts (CVPR 2024)\\\" are directly related and should be cited with comparisons to clarify this study\\u2019s contributions.\\n\\n**A:** Thank you for mentioning these two previous related works. We have already cited the first work in line 164 that you have mentioned and added the second work in the related work part of our revision. Thank you again for your suggestions, which help improve the completeness of our paper.\"}"
]
} |
1959usnw3Z | Chordal Graph Sampling-Based Mini-batch Training Algorithm for Large Graphs | [
"Su Ziyang",
"Wentao He",
"Jiayuan Lew",
"Guanjie Zheng"
] | Graph Neural Networks (GNNs) are powerful models for learning representations of attributed graphs. To scale GNNs to large graphs, many methods use various techniques, such as sampling and decoupling, to alleviate the “neighbor explosion” problem during mini-batch training. However, these sampling-based mini-batch training methods often suffer from greater information loss than decoupling-based methods or full-batch GCNs. Besides, most original segmentation methods for large graphs usually lose a large number of edges, resulting in suboptimal performance when performing mini-batch training. Therefore, we propose a Chordal Graph Sampling-based mini-batch Training algorithm for GNNs on large-scale graph datasets, called CGST. CGST includes a balanced chordal graph partition module and a batch random aggregation module to improve performance on node classification tasks while maintaining main information of the original graph structure. Experiments on three large-scale graph datasets prove the effectiveness of CGST. | [
"Large scale dataset",
"Graph neural networks"
] | https://openreview.net/pdf?id=1959usnw3Z | https://openreview.net/forum?id=1959usnw3Z | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"sbSYu9ZzZT",
"roD7nBVnju",
"o95d7ciJCi",
"nKQpgHzTwx",
"JyUTi7ZyQL"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730091306776,
1729869807993,
1730869590015,
1731571754434,
1730261685896
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10492/Reviewer_eqDf"
],
[
"ICLR.cc/2025/Conference/Submission10492/Reviewer_XWcU"
],
[
"ICLR.cc/2025/Conference/Submission10492/Reviewer_Nmtu"
],
[
"ICLR.cc/2025/Conference/Submission10492/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10492/Reviewer_jFzN"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes CGST. CGST includes a balanced chordal graph partition module and a batch random aggregation module to improve performance on node classification tasks while maintaining main information.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"S1: The scalability of GNNs is an important research problem.\", \"s2\": \"The format looks fine.\", \"weaknesses\": \"W1: Apart from the graph partition method, I don't see any difference between this paper and ClusterGCN. And the graph partition method is adopted from existing work.\", \"w2\": \"The experiment setting is strange. The authors do not use common graph datasets. The results are also unsatisfied, as a scalable method, CGST performs poorly both in terms of memory and time.\", \"w3\": \"Parts of this paper were written in LLM, e.g., Line 321, what is \\\"cluster graph spatial transformer\\\"?\", \"questions\": \"Q1: Please discuss the difference between your paper and ClusterGCN.\", \"q2\": \"In Line 94, \\\"Under extensive experiments on four real-world datasets...\\\" , where are the fourth dataset?\", \"q3\": \"Section 2.3 has a title of \\\"GNN decoupling\\\", but the main text is about attention and skip-connection. How these concepts are related to GNN decoupling?\", \"q4\": \"In Line 359, \\\"We select six baselines to evaluate the performance...\\\", where are the sixth baseline?\", \"q5\": \"In Line 388, \\\"Codes are available at...\\\", there is no implementation code in this link, here is the tex source. It is normal not to provide the code during the review stage, but please do not deceive the reviewers.\", \"q6\": \"In Line 517, \\\"Case study...\\\", this is not case study.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes a GNN training technique based on subgraph sampling, which is based on chordal subgraph partition. The authors tested the performance of CGST training on GCN across three large graphs.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. Experiments are conducted on three new datasets, which is inconsistent with previous work. Testing on new datasets is commendable.\\n\\n2. This paper is easy to understand.\", \"weaknesses\": \"1. In the introduction, the figure 1 is inappropriate. Methods like CLUSTER-GCN and GAS use METIS to partition graphs, which does not result in some nodes being removed and unable to appear in the training batches.\\n2. The author frequently mentions that chordal subgraph partition is a major contribution, but I notice that the work in section 4.2 originates from [1]. This significantly undermines the novelty and amount of work in this paper. The author should provide an accurate explanation and description of this.\\n3. There are significant problems in the experimental section of the paper, which completely fails to meet the acceptance standards of ICLR. The author should provide experimental results for a variety of GNNs, not just limit to GCN. In terms of experimental results, CGST is also not ideal in terms of Mem usage and training time. Moreover, the author should provide experimental results for commonly used datasets, such as Products.\\n\\nOverall, I think this paper has significant deficiencies, especially in the experimental section.\\n\\n[1] Jungho Ahn,Lars Jaffke,O-joung Kwon, and Paloma TLima. Well-partitionedchordalgraphs. Discrete Mathematics.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper builds on cluster-gcn, using chordal graph partition instead of the metis algorithm. The performance of CGST was tested on three large-scale datasets. Overall, the novelty of this paper is limited, and its performance is relatively average.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper is easy to understand.\\n\\n2. Experiments are conducted on three new datasets.\", \"weaknesses\": \"1. The novelty of this paper is extremely limited. The main difference from Cluster-GCN is the use of a different graph partition algorithm. Moreover, the graph partition algorithm used in this paper is not original. Additionally, the random aggregation technique mentioned in section 4.3 is also used by Cluster-GCN. The only difference is that edges between different clusters have been removed.\\n2. The experimental results indicate that CGST's performance is suboptimal. Although the accuracy is sufficiently good, as a work on scalable training, the memory usage and training time performance are worse than the baselines.\\n3. This paper does not discuss any work related to scalable training from 2022 to 2024.\\n4. This paper contains many typos.\\n5. This paper does not compare with baselines on standard datasets.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper focuses on training GNNs on large graphs, and proposes to separate the whole graph into several balanced chordal graph. The authors try to maintain main information of the original graph structure.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The organization is good to follow.\\n2. The authors find a new potential of training large-scale graph.\", \"weaknesses\": \"1. Potential violation of double-blind policy. For the link at line 387, 'LICENSE' contains \\\"Copyright (c) 2024 Su ziyang\\\", where \\\"Su ziyang\\\" is a name implying one of the authors. Besides, this link contains the LATEX file of the submission not the code.\\n2. Section 3 is about one page, but only contains some well-known knowledge.\\n3. As shown in Figure 1, the authors argue that previous mini-batch methods suffer from information loss because of removed nodes and edges. However as shown in Figure 2, the proposed model also does not consider the nodes between different cliques.\\n4. The baselines are too old, where the authors do not provide the citation for SAGN.\\n5. As shown in Table 1, the proposed model cannot achieve the best memory usage and training time in all three datasets. Considering this paper studies large-scale training, these two metrics are very important.\\n6. I strongly suggest the authors further check the writing:\\n- Section 3 \\\"PREMILINARY\\\"\\n- What's $\\\\mathcal{O}$ in Definition 2.\", \"questions\": \"1. In Algorithm 1, how to get the input clique tree? What's the complexity to construct such a tree? Considering to arbitrarily select an edge at each epoch, how to guarantee balanced partition?\\n2. What are the strengths to partition a graph into balanced chordal graph over other balanced partition?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
17idjbdHVW | A Computation and Communication Efficient Projection-free Algorithm for Decentralized Constrained Optimization | [
"Tong He",
"Hao Di",
"Haishan Ye",
"Xiangyu Chang",
"Guang Dai",
"Ivor Tsang"
] | Decentralized constrained optimization problems arise in numerous real-world applications, where a major challenge lies in the computational complexity of projecting onto complex sets, especially in large-scale systems.
The projection-free method, Frank-Wolfe (FW), is popular for the constrained optimization problem with complex sets due to its efficiency in tackling the projection process.
However, when applying FW methods to decentralized constrained finite-sum optimization problems, previous studies provide suboptimal incremental first-order oracle (IFO) bounds in both convex and non-convex settings.
In this paper, we propose a stochastic algorithm named Decentralized Variance Reduction Gradient Tracking Frank-Wolfe ($\texttt{DVRGTFW}$), which incorporates the techniques of variance reduction, gradient tracking, and multi-consensus in the FW update to obtain tight bounds.
We present a novel convergence analysis, diverging from previous decentralized FW methods, and demonstrating $\tilde{\mathcal{O}}(n+\sqrt{\frac{n}{m}}L\varepsilon^{-1})$ and $\mathcal{O}(\sqrt{\frac{n}{m}}L^2\varepsilon^{-2})$ IFO complexity bounds in convex and non-convex settings, respectively.
To the best of our knowledge, these bounds are the best achieved in the literature to date. Besides, in the non-convex case, $\texttt{DVRGTFW}$ achieves $\mathcal{O}(\frac{L^2\varepsilon^{-2}}{\sqrt{1-\lambda_2(W)}})$ communication complexity, which is close to the lower bound $\Omega(\frac{L\varepsilon^{-2}}{\sqrt{1-\lambda_2(W)}})$.
Empirical results validate the convergence properties of $\texttt{DVRGTFW}$ and highlight its superior performance over other related methods. | [
"Decentralized stochastic optimization",
"variance reduction",
"Frank-Wolfe method"
] | https://openreview.net/pdf?id=17idjbdHVW | https://openreview.net/forum?id=17idjbdHVW | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yxkD44Zebm",
"vE6vvK3cHs",
"mOwEUI71Ew",
"kRfp086ZQ5",
"ebV3w9ZjAh",
"cUj0j48SKE",
"XWrZYMpjvW",
"SauDp3Zw0V",
"PzeeRklWHT",
"JBRoNExWWE",
"9CBmqvS3Dy",
"98fwTz2iiV",
"7hT15WNDXM",
"3c4GJqPPnD"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment"
],
"note_created": [
1732934646804,
1732856962660,
1732889743610,
1730405403602,
1731579774466,
1732366311517,
1729017745668,
1732898152748,
1732854946616,
1730506557898,
1732709881356,
1731579405025,
1733282052548,
1731579715552
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Reviewer_jSAK"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Reviewer_jSAK"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Reviewer_yJBK"
],
[
"ICLR.cc/2025/Conference/Submission13773/Reviewer_jSAK"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Reviewer_CzXL"
],
[
"ICLR.cc/2025/Conference/Submission13773/Reviewer_jSAK"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13773/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"The learning rate was not proposed by [1], but rather it was introduced by [2] in 2019, which we have clearly stated in the article, with no intention of obscuring this fact. Furthermore, regarding the article you referenced, it seems like a \\\"trivial\\\" extension from unconstrained setting algorithm [3,4].\\n\\nReferences\\n\\n[1] Beznosikov, Aleksandr, David Dobre, and Gauthier Gidel. \\\"Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features.\\\" Forty-first International Conference on Machine Learning.\\n\\n[2] Stich, Sebastian U. \\\"Unified optimal analysis of the (stochastic) gradient method.\\\" arXiv preprint arXiv:1907.04232 (2019).\\n\\n[3] Nguyen, Lam M., et al. \\\"SARAH: a novel method for machine learning problems using stochastic recursive gradient.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. 2017.\\n\\n[4] Li, Z., Bao, H., Zhang, X., & Richt\\u00e1rik, P. (2021, July). PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In International conference on machine learning (pp. 6286-6295). PMLR.\"}",
"{\"comment\": \"It is trivial to extend the unconstrained method (https://arxiv.org/pdf/2210.13931) to the constrained setting. Given the bounded domain, one only needs to simplify the proof in that paper to establish the convergence rate of the decentralized Frank-Wolfe method.\"}",
"{\"comment\": \"Regarding the decentralized variance-reduction type algorithms, it is indeed common practice to employ similar techniques for bounding the consensus error. In our work, we utilize a Page algorithm like [1], which means certain aspects of our proof may appear familiar. However, there are significant distinctions that set our approach apart.\\n\\nFirstly, existing literature on decentralized unconstrained optimization typically employs a constant step size. In contrast, our paper adopts a diminishing step size of $\\\\mathcal{O}(\\\\frac{1}{t})$. This choice necessitates a different treatment in the proof, leading us to establish the iterative process $\\\\mathbb{E}[\\\\phi\\\\_{t+1}]\\\\leq \\\\max(1-\\\\frac{p}{2},1-\\\\frac{\\\\eta}{2})\\\\ldots$, contrast to the $\\\\mathbb{E}[\\\\phi_{t+1}]\\\\leq\\\\mathbb{E}[\\\\phi_t-\\\\frac{2}{\\\\eta}\\\\||\\\\nabla f(\\\\bar{x}_t)\\\\||^2-\\\\frac{8m\\\\eta}{3}]$ seen in [1], this adjustment is crucial for addressing the convex case effectively.\\n\\nSecondly, prior research on decentralized Frank-Wolfe algorithms [2, 3, 4] has relied on more intricate proof frameworks to demonstrate convergence. These frameworks, while more complex, have proven to be less efficient compared to our approach. Notably, although earlier work on decentralized unconstrained optimization [5, 6] has existed for some time, the previous decentralized Frank-Wolfe algorithm did not take advantage of them . Therefore, using a more concise and effective proof to achieve better results should be viewed as an advantage, not a disadvantage.\\n\\nReferences \\n\\n[1] Luo, Luo, and Haishan Ye. \\\"An optimal stochastic algorithm for decentralized nonconvex finite-sum optimization.\\\" arXiv preprint arXiv:2210.13931 (2022).\\n\\n[2] X. Jiang, X. Zeng, L. Xie, J. Sun and J. Chen, \\\"Distributed Stochastic Projection-Free Algorithm for Constrained Optimization,\\\" in IEEE Transactions on Automatic Control, doi: 10.1109/TAC.2024.3481040.\\n\\n[3] Hou, Jie, et al. \\\"Distributed momentum-based Frank-Wolfe algorithm for stochastic optimization.\\\" IEEE/CAA Journal of Automatica Sinica 10.3 (2022): 685-699.\\n\\n[4] Wai, Hoi-To, et al. \\\"Decentralized Frank\\u2013Wolfe algorithm for convex and nonconvex problems.\\\" IEEE Transactions on Automatic Control 62.11 (2017): 5522-5537.\\n\\n[5] Li, Boyue, Zhize Li, and Yuejie Chi. \\\"DESTRESS: Computation-optimal and communication-efficient decentralized nonconvex finite-sum optimization.\\\" SIAM Journal on Mathematics of Data Science 4.3 (2022): 1031-1051.\\n\\n[6] Xin, Ran, Usman A. Khan, and Soummya Kar. \\\"Fast decentralized nonconvex finite-sum optimization with recursive variance reduction.\\\" SIAM Journal on Optimization 32.1 (2022): 1-28.\"}",
"{\"summary\": \"This paper develops a decentralized stochastic Frank-Wolfe algorithm and establishes its convergence rate for both convex and nonconvex constrained problems. The experiment demonstrates the effectiveness of the proposed algorithm.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well written. It is easy to follow.\\n\\n2. The literature review is good.\", \"weaknesses\": \"1. The novelty is limited. Decentralized unconstrained optimization has been well studied. This paper tries to extend those algorithms to constrained problem, where the feasible set is bounded. However, this extension is trivial. In particular, due to the bounded feasible set, it is trivial to bound the gradient variance. Actually, the proof for frank-wolfe algorithm is much easier than the unconstrained counterpart.\\n\\n2. As mentioned in this paper, there are some existing decentralized Frank-wolfe algorithms for DR-submodular optimization problems. What is the difference between those algorithms and this paper? Are there any unique challenges compared to those algorithms? It would be good if the authors could discuss these critical points to show the contribution of this paper. \\n\\n3. FastMix is a not very common communication method. It would be good to provide some background for this method. For example, in standard gradient tracking method, it is well known that $\\\\bar{v}_t=\\\\bar{y}_t$. Does FastMix also have this property? It seems the authors directly use $\\\\bar{v}_t=\\\\bar{y}_t$ in the proof. \\n\\n4. It would be good to provide more details about the proof. For example, how to get the third step in Line 764? It is not very clear. \\n\\n5. How does the heterogeneity affect the convergence rate? \\n\\n6. Why does IFO not depend on the spectral gap? Any explanation?\", \"questions\": \"Please see Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"[15] Qu, Guannan, and Na Li. \\\"Accelerated distributed Nesterov gradient descent.\\\" IEEE Transactions on Automatic Control 65.6 (2019): 2566-2581.\\n\\n[16] Li, Huan, and Zhouchen Lin. \\\"Revisiting extra for smooth distributed optimization.\\\" SIAM Journal on Optimization 30.3 (2020): 1795-1821.\\n\\n[17] Ye, Haishan, et al. \\\"Multi-consensus decentralized accelerated gradient descent.\\\" Journal of Machine Learning Research 24.306 (2023): 1-50.\\n\\n[18] Liu, Yue, et al. \\\"Decentralized gradient tracking with local steps.\\\" Optimization Methods and Software (2024): 1-28.\\n\\n[19] Mokhtari, Aryan, Hamed Hassani, and Amin Karbasi. \\\"Conditional gradient method for stochastic submodular maximization: Closing the gap.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2018.\\n\\n[20] Sahu, Anit Kumar, Manzil Zaheer, and Soummya Kar. \\\"Towards gradient free and projection free stochastic optimization.\\\" The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.\"}",
"{\"comment\": \"If we have addressed your concerns, please consider raising the score, as the deadline is approaching.\"}",
"{\"summary\": \"In this paper, the authors proposed to combine the Frank-Wolfe algorithm with variance reduction as well as gradient tracking in the decentralized setting, resulting in the algorithm DVRGTFW. Convergence analysis in the convex and non-convex case are provided with numerical experiments conducted to further support the theory provided.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The author manages to combine the technique of variance reduction and gradient tracking to Frank-Wolfe algorithm in the decentralized setting, convergence analysis in both convex case and non-convex case are provided, illustrating the effectiveness of the proposed algorithm DVRGTFW.\\n\\n2. The proposed algorithm achieves best-known incremental first order oracle complexities both in the convex case and in the non-convex case, and near optimal communication complexity in the non-convex case.\\n\\n3. The paper offers numerical experiments to validate the theory presented in the paper.\", \"weaknesses\": \"1. Though the results are interesting, the proposed method appears to be primarily a combination of established techniques, such as variance reduction, gradient tracking, and the Frank-Wolfe algorithm. As a result, the novelty of the approach may be somewhat limited.\\n\\n2. If I am not mistaken, the communication complexity for DVRGTFW is not better than existing methods in the convex case given its extra dependence on $\\\\sqrt{mn}$ as it is demonstrated in Table 1, which is a limitation of the algorithm.\\n\\n3. I recommend that the authors do a thorough check of the paper as there are many typos, some of them are confusing, such examples include:\\n- At line 92, ''develop communication and communication efficient'';\\n- At line 114, $m = 0$;\\n- At line 222, $x_0 \\\\in \\\\mathbb{R}^d$,\\n- There are also some notations used without introduction in the paper.\\n\\n4. In some of the numerical experiments, the proposed algorithm is not better than existing algorithm for an unclear reason.\", \"questions\": \"1. In table 1, when $m = 1$, we should recover the complexities in the centralized setting in the convex/non-convex setting, however, for the proposed algorithm, the reviewer does not understand why it matches the bounds given in [Beznosikov et al., 2023], for example, in the convex case the table suggests $\\\\tilde{\\\\mathcal{O}}(n + \\\\frac{\\\\sqrt{n}}{\\\\varepsilon})$, while [Beznosikov et al., 2023] gives $\\\\tilde{\\\\mathcal{O}}(n + \\\\frac{1}{\\\\varepsilon})$.\\n\\n2. What is the output of Algorithm 2 FastMix? \\n\\n3. Is it possible to further improve the communication complexity of the algorithm so that it matches the optimal bounds?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"If you are referring to the learning rate in Theorem 1, it was first introduced in this paper: https://arxiv.org/pdf/2304.11737.\"}",
"{\"comment\": \"Dear Reviewer jSAK,\\n\\nThank you for your response.\\n\\nCould you please clarify your concerns regarding the novelty? The term \\\"novelty\\\" encompasses a broad range of meanings, and we would appreciate it if you could specify the particular aspects.\\n\\nIn our previous response, we have claimed that the decentralized constrained optimization problem is not a trivial extension of its unconstrained counterpart. The assumption of a bounded feasible set is fundenmental and common for the Frank-Wolfe method. \\nMoreover, prior studies [1, 2] have provided overly complex and cumbersome convergence analyses, yielding suboptimal bounds.\\n\\nIf you could further clarify what specific aspects of \\\"novelty\\\" you are referring to, we will provide detailed explanations.\\n\\nReferences\\n\\n[1] Wai, Hoi-To, et al. \\\"Decentralized Frank\\u2013Wolfe algorithm for convex and nonconvex problems.\\\" IEEE Transactions on Automatic Control 62.11 (2017): 5522-5537.\\n\\n[2] Hou, Jie, et al. \\\"Distributed momentum-based Frank-Wolfe algorithm for stochastic optimization.\\\" IEEE/CAA Journal of Automatica Sinica 10.3 (2022): 685-699.\"}",
"{\"summary\": \"The paper studies the decentralized constrained finite-sum optimization problem and provides a projection-free algorithm called DVRGTFW. In the convex and non-convex cases, the sample complexities $\\\\mathcal{O}(n+\\\\sqrt{n/m}L\\\\varepsilon^{-1})$ and $\\\\mathcal{O}(\\\\sqrt{n/m}L^2\\\\varepsilon^{-2})$ are established, respectively. Numerical experiments validate the performance of the algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper shows better theoretical convergence results compared to previous works. Specifically, by incorporating techniques such as gradient tracking and multi-consensus, it extends constrained finite-sum algorithms to the decentralized setting. The convergence of DVRGTFW is analyzed using Lyapunov functions, theoretically establishing improved sample and communication complexities, which is also validated by numerical experiments.\", \"weaknesses\": \"While improved theoretical results are established for decentralized Frank-Wolfe method, the techniques are overall similar to existing ones.\", \"questions\": \"1. Should the sample complexity in the non-convex case be $\\\\mathcal{O}(n + \\\\sqrt{n/m}L^2\\\\varepsilon^{-2})$? Letting $m = 1$, the problem reduces to the centralized finite-sum setting, where the sample complexity should be $\\\\mathcal{O}(n + \\\\sqrt{n}\\\\varepsilon^{-2})$ or $\\\\mathcal{O}(n\\\\varepsilon^{-2})$, as shown in [1].\\n\\n2. In Table 1, is a direct comparison of convergence rates with [2] appropriate? Specifically, this paper addresses a finite-sum problem, whereas [2] deals with an online setting. Since DVRGTFW cannot be directly applied to the online problem, such a comparison may be inappropriate. The authors should at least point out the differences in settings when making these comparisons.\\n\\n3. Finally, there are some minor issues, such as typos. \\n- The Lyapunov functions defined in L.739 use the symbols $\\\\Phi$ and $\\\\Psi$ , but in several places in the following proofs, they are written as $\\\\phi$ and $\\\\psi$ (L.994, L.1069, L.1076, L.1082, and L.1085).\\n- L.818. ``fastMix'' should be ``FastMix''.\\n- The paper [1] has been accepted in ICML and the reference should be updated.\\n\\n---\\nReferences\\n\\n[1] Aleksandr Beznosikov, David Dobre, and Gauthier Gidel. Sarah frank-wolfe: Methods for constrained optimization with best rates and practical features. In ICML, 2024.\\n\\n[2] Hoang Huy Nguyen, Yan Li, and Tuo Zhao. Stochastic constrained decentralized optimization for machine learning with fewer data oracles: a gradient sliding approach. arXiv preprint arXiv:2404.02511, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank for addressing most of my concerns. However, the major issue, limited novelty, still remains. I will keep my score.\"}",
"{\"comment\": \"1.Decentralized constrained optimization is definitely a non-trivial problem. Starting from the initial distributed unconstrained methods with linear convergence rates [7], it took approximately six years to propose the first distributed gradient tracking proximal algorithm with linear convergence rates [8]. However, as mentioned in our paper, projection into complex sets, such as trace norms , can be computationally expensive. For example, for the trace norm constraint, FW can leverage the power method to efficiently obtain only the largest eigenvalue of the matrix, avoiding the costly computation of a full SVD [see this note, page10] (https://www.stat.cmu.edu/~ryantibs/convexopt-F18/lectures/frank-wolfe.pdf). To address the issue of high computational costs, methods such as the Frank-Wolfe algorithm can be utilized to find solutions in the constraint set via a linear oracle, which is usually cheaper in computation cost than using a direct projection.\\n\\nRegarding the bounded set assumption you mentioned, it is a standard assumption in Frank-Wolfe type algorithms, which applied in both centralized setting [9] and distributed setting [10]. Additionlly, consindering Question 5, we think that the mentioned term ``gradient variance`` refers to $\\\\sum_{i=1}^m \\\\Vert \\\\nabla f_i(x) - \\\\nabla f(x)\\\\Vert^2$, which measures the variance between nodes' gradients and the global one. While, this gradient variance **does not** need to considered in our method due to the use of gradient tracking technique, which utilizes bias correction to compensate heterogeneous gradient [18] and can overcome the heterogeneity. Hence, our proof **does not** rely on this assumption. If this gradient variance is considered as $\\\\sum_{i=1}^m \\\\Vert \\\\nabla f_{i, \\\\xi}(x) - \\\\nabla f_i(x)\\\\Vert^2$, which arises from stochasity, this is a stand assumption in stochastic optimization, regardless of whether the problem is constrained or unconstrained. Therefore, we do not see any reason that this bounded gradient variance render our proof trivial.\\n\\nBesides, the proof of Frank-wolfe is not easier than its counterpart, e.g., stochastic gradient descent (SGD). \\nFor instance, as shown in [19, 20], the stochastic frank-wolfe is actually more complexty than SGD due to the need for an additional gradient estimator to ensure convergence. The analysis of the bound between this gradient estimator and the true gradient is more intricate compared to that in SGD.\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"comment\": \"2.The DR-submodular problem is indeed mentioned in the article, but we only use it as an example in the related work section to demonstrate the development of the decentralized Frank-Wolfe algorithm for this problem. In fact, the DR-submodular problem is just a subset of our broader problem. The DR-submodular problem makes additional assumptions about the function, which our paper does not require. We are comparing more general algorithms, such as DeFW [10], DstoFW [11], DMFW [12] and I-PDS [13] (the assumption on the constraint sets has some differences). The contribution of this article has been highlighted in the contribution section of the introduction, this article has the best-known IFO complexity both in the convex case and the non-convex case and nearly optimal communication complexity in non-convex case compared to the previous decentralized stochastic Frank-Wolfe algorithm. The contribution is significant in variance-reduction type problems.\\n\\n3.Acceleration method is commonly used in distributed algorithms to reduce the dependence on spectral gap for the convergence speed, for more details, please refer to [14] , if you need it, we can add a section about acceleration method in the introduction. Regarding $\\\\bar{y}_t=\\\\bar{v}_t$, we have mentioned it in Lemma 2 in Appendix A.\\n\\n4.In line 778-779, we have briefly outlined the derivation of the proof in lines 764. The proof process goes as follows: starting from the update of $\\\\mathbf{d}\\\\_t=argmin\\\\_{\\\\mathbf{d}\\\\in\\\\mathcal{X}}\\\\langle \\\\mathbf{y}\\\\_{t},d\\\\rangle$ in Algorithm 1, we can easily deduce that for each $i\\\\in[m]$, $\\\\langle \\\\mathbf{y}\\\\_{i,t},\\\\mathbf{d}\\\\_{i,t}-\\\\bar{x}\\\\_t \\\\rangle \\\\leq \\\\langle \\\\mathbf{y}\\\\_{i,t},x^*-\\\\bar{x}\\\\_t \\\\rangle$, then we use the term $\\\\langle \\\\mathbf{y}\\\\_{i,t},x^*-\\\\bar{x}\\\\_t \\\\rangle$ to substitute the term $\\\\langle \\\\mathbf{y}\\\\_{i,t},\\\\mathbf{d}\\\\_{i,t}-\\\\bar{x}\\\\_t \\\\rangle$, add it to the third term $\\\\langle \\\\nabla f(\\\\bar{x}\\\\_{t})-\\\\mathbf{y}\\\\_{i,t},\\\\mathbf{d}\\\\_{i,t}-\\\\bar{x}\\\\_{t}\\\\rangle$ in line 762 and sum up to obtain the result in line 764.\\n\\n5.A key feature of gradient tracking is a tracking mechanism that allows to overcome data heterogeneity between nodes [18].\\n\\n6.Please refer to the third point, the acceleration algorithm [15,16,17] can reduce the dependence on spectral gap for the convergence speed, and as we have mentioned in Remark 1, the multi-consensus step in FastMix enable our analysis closed to the centralized algorithm. Moreover, the acceleration algorithm was not initially introduced in this work, we refrain from providing extensive insights into its mechanics.\\n\\nReferences\\n\\n[1] Ling, Qing, et al. \\\"Decentralized low-rank matrix completion.\\\" 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2012.\\n\\n[2] Mackey, Lester W., Ameet Talwalkar, and Michael I. Jordan. \\\"Distributed matrix completion and robust factorization.\\\" J. Mach. Learn. Res. 16.1 (2015): 913-960.\\n\\n[3] Yu, Hsiang-Fu, et al. \\\"Scalable coordinate descent approaches to parallel matrix factorization for recommender systems.\\\" 2012 IEEE 12th international conference on data mining. IEEE, 2012.\\n\\n[4] Lacoste-Julien, Simon. \\\"Convergence rate of frank-wolfe for non-convex objectives.\\\" arXiv preprint arXiv:1607.00345 (2016).\\n\\n[5] Lafond, Jean, Hoi-To Wai, and Eric Moulines. 
\\\"On the online Frank-Wolfe algorithms for convex and non-convex optimizations.\\\" arXiv preprint arXiv:1510.01171 (2015).\\n\\n[6] Duchi, John, et al. \\\"E cient projections onto the 1-ball for learning in high dimensions.\\\" Proceedings of the 25th International.\\n\\n[7] Shi, Wei, et al. \\\"Extra: An exact first-order algorithm for decentralized consensus optimization.\\\" SIAM Journal on Optimization 25.2 (2015): 944-966.\\n\\n[8] Alghunaim, Sulaiman, Kun Yuan, and Ali H. Sayed. \\\"A linearly convergent proximal gradient algorithm for decentralized optimization.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[9] Jaggi, Martin. \\\"Revisiting Frank-Wolfe: Projection-free sparse convex optimization.\\\" International conference on machine learning. PMLR, 2013.\\n\\n[10] Wai, Hoi-To, et al. \\\"Decentralized Frank\\u2013Wolfe algorithm for convex and nonconvex problems.\\\" IEEE Transactions on Automatic Control 62.11 (2017): 5522-5537.\\n\\n[11] Jiang, Xia, et al. \\\"Distributed stochastic projection-free solver for constrained optimization.\\\" arXiv preprint arXiv:2204.10605 (2022).\\n\\n[12] Hou, Jie, et al. \\\"Distributed momentum-based Frank-Wolfe algorithm for stochastic optimization.\\\" IEEE/CAA Journal of Automatica Sinica 10.3 (2022): 685-699.\\n\\n[13] Nguyen, Hoang Huy, Yan Li, and Tuo Zhao. \\\"Stochastic Constrained Decentralized Optimization for Machine Learning with Fewer Data Oracles: a Gradient Sliding Approach.\\\" arXiv preprint arXiv:2404.02511 (2024).\\n\\n[14] d\\u2019Aspremont, Alexandre, Damien Scieur, and Adrien Taylor. \\\"Acceleration methods.\\\" Foundations and Trends\\u00ae in Optimization 5.1-2 (2021): 1-245.\"}"
]
} |
|
17U3nlco2r | ChebyNet: Boosting Neural Network Fitting and Efficiency through Chebyshev Polynomial Layer Connections | [
"Yue Xin",
"Jiarui Zhang",
"Ziyang Zheng",
"Yaoming Wang",
"Wenrui Dai",
"Chenglin Li",
"Junni Zou",
"Hongkai Xiong"
] | Traditional deep neural networks (DNNs) predominantly adhere to a similar design paradigm. Even with the incorporation of additive shortcuts, they lack explicit modeling of relationships between non-adjacent layers. Consequently, this paradigm constrains the fitting capabilities of existing DNNs. To address this issue, we propose ChebyNet, a novel network paradigm to build Chebyshev polynomial connections between general network layers. Specifically, we establish recursive relationship among adjacent layers and polynomial relationship between non-adjacent layers to construct ChebyNet, which improves representation capabilities of the network. Experimentally, we comprehensively evaluate ChebyNet on diverse tasks, including function approximation, semantic segmentation, and visual recognition. Across all these tasks, ChebyNet consistently outperforms traditional neural networks under identical training conditions, demonstrating superior efficiency and fitting properties. Our findings underscore the potential of polynomial-based layer connections to significantly enhance neural network performance, offering a promising direction for future deep learning architectures. | [
"DNN",
"Chebyshev Polynomial"
] | https://openreview.net/pdf?id=17U3nlco2r | https://openreview.net/forum?id=17U3nlco2r | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"eT4VOtumxy",
"d0ZnKoCRXI",
"b5p12pkifp",
"Z7kihS5leJ",
"KQqlOKWoAP",
"0NGVzLpBJ5"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1730240763204,
1729957026275,
1730654750073,
1730979650889,
1730714976286,
1732097391503
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8968/Reviewer_F5nM"
],
[
"ICLR.cc/2025/Conference/Submission8968/Reviewer_f6mi"
],
[
"ICLR.cc/2025/Conference/Submission8968/Reviewer_rj4R"
],
[
"ICLR.cc/2025/Conference/Submission8968/Reviewer_6oyp"
],
[
"ICLR.cc/2025/Conference/Submission8968/Reviewer_ryV2"
],
[
"ICLR.cc/2025/Conference/Submission8968/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"Current architectures used in the field of deep learning are limited in the modeling of relationships between non-adjacent layers. Skip-connections are popular additive methods for connecting non-adjacent layers, and prior work has explored the use of polynomial functions to establish relationships between layers. In this work, Chebyshev polynomials of high order have been applied to several modern architectures and evaluated on several tasks. The experiments show that adding Chebyshev polynomials to the architectures can help improve performance slightly when compared with certain baselines.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Originality\\nThis work explores an under-explored direction of research. Prior work has not carried out such an extended study.\\n\\nQuality\\nThe method has been applied to many different tasks, trying to show the potential applications to several areas where deep learning is traditionally applied.\\n\\nSignificance\\nSpecific problems will require specific biases, and Chebyshev polynomials are indeed an interesting way to provide useful modeling biases to the available architectures.\", \"weaknesses\": \"The contribution is not clear. The first contribution claimed states that ChebyNet is introduced. However, several architectures are apparently used in the experiments, where Chebyshev polynomials are somehow applied to existing architectures to attempt improvements in their performance. Please clarify the first contribution. The second contribution claims that ChebyNet consistently surpasses existing networks, however, in the experiments it is clear that this is only true for certain hyperparameter choices (which are not clear).\\n\\nThe methodology is far from being clear or reproducible. The polynomials are described, but how they are applied to the network architecture is never explained in sufficient detail to allow an expert to reproduce the results obtained. No details on the architecture structure, no pseudocode of the implementation, no details of the optimizer used for each experiment (it is mentioned only for one) or the learning rate, weight decay, or other details on data augmentation, and so on.\\n\\nThere are writing issues with the manuscript, please read it over again and fix grammatical and typographical mistakes (e.g. Implimentation as the title of section 3.3).\\n\\nNumerical function approximation loss un Figure 2 shows Loss against Order. What does Order mean for an MLP? Please, again, do not place results and experiments without explaining what was done. Why is the MLP failing to fit a quadratic function? Was the error achieved exactly 0? This might not be surprising given that polynomials are part of the architecture itself, but would have been interesting a more in-depth discussion.\\n\\nThe FID obtained on MNIST appears very high for both the Cheby-UNet, and the baseline UNet. More details on the hyperparameters used would help understand the performance. The quality of the samples also appears qualitatively worse than a simple UNet-based implementation of diffusion available on GitHub (https://github.com/bot66/MNISTDiffusion). More details are necessary.\\nThe use of FID as a metric is not sufficient. As the objective is showing the ability of the architecture to fit complex functions, the log-likelihood would have also been an important metric to display, as it more closely shows the ability of a model to fit complex functions. 
MNIST is not enough, and at least CIFAR10 should have been used. I would also suggest CelebA, which appears more complex but is actually quite simple compared to CIFAR10 for a generative model.\\n\\nThere was no discussion or acknowledgment of the limitation which comes from using models with different parameter count. From the text it is not clear whether the parameters count was kept constant, or if at least the comparison could be considered fair in all experiments.\", \"questions\": \"Clear method section, with a straightforward explanation of the architectures used would go a long way in understanding the significance of the experiments.\\n\\nWhat was the reason behind the choice of experiments and baseline architectures? \\n\\nWhy are Chebishev polynomials particularly good? This is not really clear from the text. Are they a fundamental ingredient that all modern architectures should use to propel their performance further?\\n\\nCould you run some experiments regarding the precise type of polynomials used, or more clear ablations on where and how the polynomials were applied?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes ChebyNet, a neural network architecture that uses Chebyshev polynomial connections to boost the network's fitting capabilities and efficiency. The idea is to go beyond typical additive shortcuts, adding both recursive connections between adjacent layers and polynomial-based relationships between non-adjacent layers. The authors demonstrate the effectiveness of ChebyNet on various tasks, like function approximation, image classification, and semantic segmentation, showing that it often outperforms standard networks with fewer parameters.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. Novel Use of Polynomials: Applying Chebyshev polynomial connections for inter-layer relationships is an interesting twist that brings more flexibility to the model's structure.\\n2. Versatility Across Tasks: The method shows improvements across different tasks, suggesting it has general applicability.\", \"weaknesses\": \"1. Implementation: The implementation details are unclear. In the methodology section, Equations 3 and 4 outline the connectivity patterns between layers, but there is no specific guidance on how to apply these connections to complex architectures like UNet (as shown in Tables 1 and 4), or ResNet and MobileNet (as shown in Table 5). This raise serious problems when I want to to dive in the details of the paper. Those details are also not incorporated in the Appendix as well.\\n\\n2. Unclear Use of Polynomials: The method appears to focus on a recursive layer connectivity similar to Chebyshev polynomials, but it doesn't actually involve using polynomials. Equation 4 resembles Equation 3 but starts with a different initial condition, leading to entirely different sequences in the recursion.\\n\\n3. Computation: While the paper claims the efficiency (Line 88-89, \\\"fewer parameters and reduced computational overhead\\\"), there is no actually discussion on the real computation gain with respect to different applications. From my understanding, with increased connectivity, there is a likelihood of higher computational costs, which is why architectures like DenseNet, despite their strong performance, are not widely adopted in real-world applications. The paper does not sufficiently discuss how ChebyNet handles the potential slowdown due to the additional polynomial connections.\\n\\n4. Limited Baseline Comparisons: The paper mainly introduce a new type of connectivity of layers, which is more on par for ResNet and DenseNet. However, the comparisons are mostly against basic versions of popular models. Adding comparisons with more sophisticated connectivity strategies would strengthen the results and make the findings more convincing.\", \"questions\": [\"How do you integrate the Chebyshev connections into complex architectures like ResNet or UNet? Can you provide more concrete details?\", \"Given that Equation 4 diverges from the typical polynomial sequence, what justifies calling the method polynomial-based? Is the benefit truly coming from the polynomial structure or just from additional learned connections?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"In this paper, the author addresses the issue that previous neural network architectures fail to explicitly model the relationships between different layers. To mitigate this, the author introduces ChebyNet, which models the recursive and polynomial relationships between layers. To validate the proposed method, the authors conduct several experiments across various tasks. The results demonstrate that incorporating relationships between layers enhances performance and suggests a promising direction for network structural design.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper is well-crafted, effectively illustrating the concepts and experimental results.\\nTo validate the proposed method, the authors conduct extensive experiments across various tasks, including image classification and image segmentation. The outcomes of these experiments confirm the efficacy of the proposed method.\", \"weaknesses\": \"Despite the demonstrated experimental improvements, I have several concerns regarding the proposed method. Firstly, could the authors provide an analysis of the memory usage, inference time, and training time of the proposed method? I am interested in determining whether it requires additional resources to train the model. Additionally, the use of MNIST and CIFAR datasets might not be sufficient to thoroughly validate the method; could the authors present results on larger datasets? Furthermore, could the authors discuss the robustness of the proposed method? While modeling the relationship between different layers may increase the capacity of the model, it could also increase the risk of overfitting.\", \"questions\": \"please refer to weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper studies a new approach for interactions between non-adjacent layers in neural networks. Existing interactions among non-adjacent layers are typically studied as additive shortcuts as ResNets, dense connections, and attention mechanisms. This paper aims to bring a new type of interactions between nonadjacent network layers, by multiplying Chebyshev polynomials of inputs (up to a downsampling) to features as element-wise or Hadamard products. Since the 0-order Chebyshev polynomial is the identity, such a scheme can be regarded as an extension of existing network features $f(x)$ to those multiplied element-wisely by sums of high order Chebyshev polynomials of inputs, i.e. $f(x) \\\\circ \\\\sum_{i=0}^n L_i(g(x))$, where $L_i$ is defined as the Chebyshev polynomials of the first kind recursively and $g(x)$ is a downsampling operation to align the dimensionality of input $x$ with the feature $f(x)$.\\n\\nThe motivation of exploiting Chebyshev polynomials roughly lies in the fact that their roots, Chebyshev nodes, actually provide a tight bound in polynomial interpolation of continuous functions that minimizes the Runge oscillation phenomenon. Moreover, Chebyshev polynomials in the first kind has a recursive representation which can be easily implemented with deep neural networks.\", \"the_utility_of_such_a_construction_is_demonstrated_by_several_experiments\": \"low dimensional numerical function approximation, MNIST image generation using UNet-diffusion, learning of some dynamical systems (2-body, 3-body, and real pendulum problem), UNet image segmentation (ACDC and BraTS19), and image classification (Cifar-10 and Cifar-100).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The idea of multiplying Chebyshev polynomials of inputs to features as element-wise (Hadamard) product is novel.\\n\\n2) Experiments show that such ChebyNet of using high order (1,...,9) Chebyshev polynomials may often improve the precision over the plain networks and be often better than using ordinary polynomials (PolyNet).\", \"weaknesses\": \"1) The reproducible codes are not provided with the paper anonymously. Since it is mainly an experimental paper for ICLR, reproducible research is necessary to evaluate the experimental results.\\n\\n2) The motivation of designing ChebyNet architecture seems not clear enough. Among the various possibility of non-adjacent layer interactions, why do the authors choose elementwise product between the sums of Chebyshev polynomials of inputs and features? It seems to me that the 0-th order Chebyshev polynomial recovers the original networks. But what about high order polynomials? Why not additive form? Why not using weighted sum of polynomials while the weights could be tuned?\\n\\n3) One motivation of exploiting Chebyshev polynomials roughly lies in the fact that their roots, Chebyshev nodes, actually provide a tight bound in polynomial interpolation of continuous functions that minimizes the Runge oscillation phenomenon. Does this property lead to any particular consideration of constructing ChebyNet architecture? Moreover, why does the recursive formulation provide superior numerical stability?\\n\\n4) In the performance metrics, margins of improvement over the baseline are sometimes small and we are not sure if the improvements are significant. Since this is the first kind of experiments for the proposed methods, it would be better to include certain error bars to account for randomness in evaluations. 
\\n\\n5) Figure 5 shows the differences between Cheby-CNN and Poly-CNN in image classification, highlighting the negative correlations on the low orders (0-2) Cheby-CNN. The authors suggest that \\\"The strong correlation among high-order features suggests that low-order features are already sufficient for representing the underlying information, indicating potential for parameter compression\\\". However, from Table 5, middle to high order polynomials seem with high performance as well. Is there a principle in polynomial order selections? By the way, in the last row of this table, some number like 77.1, 76.8 seems missing the highlighted bold font as they are higher than the baseline.\", \"questions\": \"See the questions raised above in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces ChebyNet, a novel architecture aimed at enhancing neural networks by fostering connections between non-adjacent layers, an area typically underexplored in conventional networks. The core motivation is the limited interaction between distant layers, which can constrain a network's capacity to model complex functions. ChebyNet addresses this by employing Chebyshev polynomial basis functions to augment layer connections, which are then fused with outputs, effectively enhancing the network\\u2019s representational power. The proposed method is evaluated across various tasks, including regression, image generation on MNIST, and classification, demonstrating that ChebyNet is versatile and improves performance in numerous settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tChebyNet is adaptable and can be seamlessly integrated into multiple existing architectures, such as UNet and HNN, with minimal implementation complexity.\\n2.\\tThe approach is tested on a range of tasks and exhibits performance gains in most cases, supporting its practical efficacy.\\n3.\\tChebyNet shows a robust capacity for approximating various mathematical functions, with Table 6 in the appendix indicating superior performance over MLP and Poly-MLP in approximating elementary functions like sign(x) and tanh.\", \"weaknesses\": \"1.\\tThe motivation for ChebyNet could be further clarified. While the paper states that existing networks lack inter-layer connections, there are established models, like DenseNet, that enhance layer interactions. Thus, the benefit of Chebyshev polynomial-based connections versus simpler dense, residual or pyramid connections remains unclear.\\n2.\\tFigure 1 could be refined for clarity, as it currently suggests that polynomial connections link the network\\u2019s input and output directly, whereas, according to the text part, these connections are applied within layers.\\n3.\\tThe method\\u2019s evaluation is limited to small-scale datasets. Testing on larger benchmarks, such as ImageNet, would provide a more compelling demonstration of its scalability.\\n4.\\tWhile ChebyNet is posited to improve non-adjacent layer interactions, the paper lacks strong empirical / theoretical evidence to substantiate this claim fully.\\n5.\\tThe paper claims that Chebyshev connections can enhance the efficiency of DNNs; however, no experiments are provided to validate this claim. To my knowledge, additional connections may introduce extra memory and I/O overhead during inference. Supplementary experiments demonstrating the efficiency benefits would strengthen the paper.\\\"\", \"questions\": \"Please refer to the weaknesses, particularly the motivation for using Chebyshev polynomials to enhance DNNs. How does this method compare to simpler residual or dense connections in terms of inter-layer interaction benefits?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}"
]
} |
|
1762Fbr4HK | Deep Generative Modeling for Identification of Noisy, Non-Stationary Dynamical Systems | [
"Doris Voina",
"J. Nathan Kutz",
"Steven Brunton"
] | An important challenge in many fields of science and engineering is making sense of time-dependent measurement data by recovering governing equations in the form of differential equations. We focus on finding parsimonious ordinary differential equation (ODE) models for nonlinear, noisy, and non-autonomous dynamical systems and propose a machine learning method for data-driven system identification. While many methods tackle noisy and limited data, non-stationarity – where differential equation parameters change over time – has received less attention. Our method, dynamic SINDy, combines variational inference with SINDy (sparse identification of nonlinear dynamics) to model time-varying coefficients of sparse ODEs. This framework allows for uncertainty quantification of ODE coefficients, expanding on previous methods for autonomous systems. These coefficients are then interpreted as latent variables and added to the system to obtain an autonomous dynamical model. We validate our approach using synthetic data, including nonlinear oscillators and the Lorenz system, and apply it to neuronal activity data from C. elegans. Dynamic SINDy uncovers a global nonlinear model, showing it can handle real, noisy, and chaotic datasets. We aim to apply our method to a wide range of problems, specifically to dynamic systems where complex parametric time dependencies are expected. | [
"system identification",
"non-autonomous differential equations",
"dynamical systems",
"variational inference",
"variational autoencoders",
"SINDy",
"sparse regression",
"uncertainty quantification",
"latent variable discovery",
"biophysics applications",
"biology",
"neuroscience"
] | https://openreview.net/pdf?id=1762Fbr4HK | https://openreview.net/forum?id=1762Fbr4HK | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yH5CpDkGTG",
"xuOzArIA2O",
"keXGq3XjZk",
"aTEblZuaOF",
"QoaFhcgDRo"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1730626321605,
1730568269918,
1732502198565,
1730641229086,
1730717551974
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12372/Reviewer_zDkz"
],
[
"ICLR.cc/2025/Conference/Submission12372/Reviewer_Mq5x"
],
[
"ICLR.cc/2025/Conference/Submission12372/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12372/Reviewer_NQwk"
],
[
"ICLR.cc/2025/Conference/Submission12372/Reviewer_sBqV"
]
],
"structured_content_str": [
"{\"summary\": \"The authors propose DynamicSINDY, an approach that aims to learn sparse ODEs with time-varying coefficients from noisy, non-stationary time series data. The authors achieve this by combining SINDy with sequential VAEs to probabilistically infer the coefficients and their time-varying values. The method is evaluated on three synthetic datasets and a calcium imaging dataset of C. elegans.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors address an important and often overlooked problem in modeling time series data: learning interpretable models from non-stationary dynamical systems.\", \"The problem and the proposed method are presented clearly, and the paper is generally easy to follow.\"], \"weaknesses\": [\"While the motivation of the problem is clear, I find that the conducted experiments fail to convince the reader of the method's impact in real-world settings. The experiments mainly focus on synthetic datasets that are artificially created to fit this problem and avoid most challenges often encountered in real-world datasets (high dimensionality, non-Gaussian noise, large space of possible coefficients). I think the paper would be greatly benefit if the authors can demonstrate the method in such contexts.\", \"The C. elegans dataset used to demonstrate the method is quite simple, and the method is only applied to low-dimensional representations obtained via PCA (which in this case is enough to explain the data). The impact of using DynamicSINDY in this case is not well motivated, and the obtained results don't add any much scientific insights, especially since identifiability is not discussed.\", \"While this is not necessarily always a weakness, the proposed method is a straightforward combination of two existing approaches. Taken together with the limited experiments section and the lack of significant technical innovations, I find the overall contribution of the paper in its current form rather limited.\"], \"questions\": [\"Variational inference is known to provide overconfident uncertainty estimates because of the KL term encouraging mode-covering behavior. Can the authors discuss this further and the impact of this on the method?\", \"How robust are the identified parameters? Is there a quantitative relationship between the robustness and (1) the size of the library, (2) the level of noise in the system?\", \"It is stated that the method can only deal with non-stationarity arising from separable time-varying variables. Can the authors elaborate more on why this is the case?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": [\"This paper presents dynamic SINDy, a machine learning method designed to identify the governing equations of noisy, nonlinear, and non-autonomous dynamical systems from data. Dynamic SINDy combines variational inference with sparse identification of nonlinear dynamics to identify time-varying coefficients in sparse ordinary differential equations. The method is particularly valuable for non-stationary systems with changing parameters over time. The contributions include\", \"Modeling Time-Varying Coefficients: Dynamic SINDy employs deep generative models, specifically variational autoencoders (VAEs), to learn the time-varying nature of ODE coefficients. This enables the identification of non-autonomous systems that exhibit complex dynamics.\", \"Uncertainty Quantification: The use of VAEs allows dynamic SINDy to quantify uncertainty in the estimated ODE coefficients. This is crucial for understanding the reliability of the identified model.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Modeling Time-Varying Coefficients: Dynamic SINDy employs deep generative models, specifically variational autoencoders (VAEs), to learn the time-varying nature of ODE coefficients. This enables the identification of non-autonomous systems that exhibit complex dynamics.\", \"Uncertainty Quantification: The use of VAEs allows dynamic SINDy to quantify uncertainty in the estimated ODE coefficients.\", \"Latent Variable Discovery: Dynamic SINDy can effectively uncover hidden (latent) variables that influence the observed dynamics. This is demonstrated through an example with the Lotka-Volterra equations, where only prey population data is available.\", \"Application to Real-World Data: The paper validates dynamic SINDy's capabilities on both synthetic and real-world data, i.e. the C. elegans data.\"], \"weaknesses\": [\"Uncertainty Quantification: though sec 4.2 shows the estimated standard deviation follows the truth, it only confirms the accuracy of estimation. Nonetheless, it is unclear what use the uncertainty could provide.\", \"The examples are mostly using systems that driven by mean/input processes. It's unclear how the proposed method would perform for noise driven dynamics.\", \"In the C. elegans example, the assumed form depends heavily on prior knowledge (dimensionality, input) on the dynamics. Though the proposed method has shown very good performance, it's unclear what scientific insights the proposed method could offer especially for the systems that people have limited knowledge about.\"], \"questions\": [\"What can one do with the uncertainty? Is it necessary for accurate estimate? Would the uncertainty provide insights for scientific questions? Suggestion: elaborate more or showcase scenarios where the uncertainty is useful vs. method.\", \"How does the method perform for noise driven dynamics? e.g. system with multiple meta-stable points or line attractors.\", \"What scientific insights the proposed method could offer for the systems that don't have particular a priori form of ODEs? If we don't know u(t) switches, would it discover that?\", \"Line 461 typo: rLDS\", \"Suggestion: put Dynamics SINDy reconstruction together with SLDS result in Fig. 7.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their thoughtful comments and the time they spent reading our manuscript.\\nWe have decided it is on our best interest right now to withdraw our paper.\\n\\nbest regards,\\n-the authors\"}",
"{\"summary\": \"the authors introduce dynamic sindy \\u2014 an extension of the sindy for uncovering nonstationary dynamical systems from data. the authors use a time-series VAE architecture to map their data to time-varying coefficients that are linearly combined with a fixed library of basis functions to produce an estimate of the data derivative. they conduct experiments using several toy datasets showing that their dynamic sindy can recover the coefficients of time-varying dynamical systems, even in the case that the entire system is not observed. finally, they show on low-dimensional representations of c. elegans neural recordings, that their method recovers a representative dynamical system of the first principal component.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"dynamical system reconstruction or system identification is an important topics, and learning more interpretable models of system dynamics, such as time varying ode representations like the authors, has broad application to many fields. additionally, the authors consider a comprehensive amount of toy experiments \\u2014 i appreciate that the authors considered experiments with several time varying motifs (i.e. switching, sigmoid, switch, fourier) and show that their method can recover the time varying coefficients with a calibrated measure of uncertainty.\", \"weaknesses\": \"i found the separation between what has previously been done in the literature and what are the authors exact main novel contributions to be unclear; for example, more precise statements about the differences with hypersindy (around line 096) would have been very helpful. in its current form, it is hard to parse from the manuscript what their exact technical advances are.\\n\\na lot of real-estate in the paper is taken up by the experimental plots. often i found the amount of information conveyed by the plots disproportionate to the amount of space they take up \\u2014 making more compact figures seems like it would work to the authors advantage. additionally, information could be conveyed better i.e. thick lines and their ordering (i.e. green/blue lines in Fig 2d are not clear). Fig 1B has a lot of small labels and the zoomed out view of timeVAE does not feel like it helps much.\", \"questions\": \"have the authors applied their method to any datasets requiring a higher dimensional latent space to see how quality of the learned dynamical system scales with dimensionality of the latent space?\\n\\nhave the authors considered comparisons to a method more adept at handling more smooth like transitions of dynamics such as [1] which considers a smoothly switching latent system. \\n\\n[1] Kurle, Richard, et al. \\\"Deep rao-blackwellised particle filters for time series forecasting.\\\" Advances in Neural Information Processing Systems 33 (2020): 15371-15382.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes \\u2018dynamic sparse identification of nonlinear dynamics\\u2019 (dynamic SINDy), a deep learning framework for identifying governing equations in noisy, non-stationary and nonlinear dynamical systems (DS). By combining variational autoencoders (VAEs) and previous work on SINDy, it enables unsupervised inference of the underlying ODE systems\\u2019 parameters while extracting a global and parsimonious nonlinear dynamical model. The approach is validated on both synthetic and real-world data and is compared to other methods in the field, demonstrating great potential for scientific machine learning community.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Learning a parsimonious representation of non-autonomous DS is extremely important and relevant in many scientific disciplines, which makes the approach very promising.\", \"I think it is highly interesting that the encoder-decoder architecture is able to predict the ODE parameters with this level of fidelity in an unsupervised fashion (as there is no direct reconstruction loss for the ODE parameter time series involved in the loss function (7)).\", \"The method is tested against other baselines and also on a real-world dataset (C. elegans).\"], \"weaknesses\": [\"The authors should stick to the ICLR style guide and use the 'author, year' reference style instead of mere numerical numbers (i.e. APA style instead of IEEE). This increases readability and helps the reader to understand the train of thought of the authors, as one directly sees on which work the authors base certain statements.\", \"Center box in Fig. 1B is in parts hard to read as (font) sizes vary a lot. I think it would be better to shrink down Fig. 1A a touch and to increase size of Fig. 1B, especially as it describes the main framework of the manuscript.\", \"I also think the figure group titles ('suptitles') above Fig. 1, 2, and 4 are superfluous and their message should be put into the figure caption. This would create additional space (e.g. to compensate for the change in referencing style).\", \"I think \\u2018dynamic HyperSINDy\\u2019 deserves a bit more attention in the main text, which lacks explanation on how this approach really works. Explaining this method in the supplement makes the corresponding results a bit hard to read and almost forces the reader to read the supplement section 1.2.2.\", \"All of the employed (benchmark) datasets are fairly low dimensional (2-3D). The authors do not address the scalability of the method to high dimensional systems (which can not be sufficiently described by the first few PCA components). I think this is a major drawback, as this setting is highly relevant to many real-world systems.\"], \"minor_details\": [\"typo: Fig. 3B y-axis label say \\u201capproxiate std\\u201d\", \"l. 353 It just says 6A and 6B, while the authors probably reference Fig. 5A and 5B? Also l. 360 it says 6C instead of 5C.\", \"typo: supplement l. 262 it says weight decay of 1e5 (I assume 1e-5?)\"], \"questions\": [\"For the switch signals (Fig. 2 a-c, also Fig. 3A low noise setting), the inferred ODE parameter time series seem to exhibit high frequency oscillations on top of the correct switch-like dynamics. Is there an intuitive explanation why the encoder-decoder architecture struggles in inferring the correct switching dynamics and how this could be addressed?\", \"Results of Fig. 3B look rather weak to me, can the authors report Pearson\\u2019s $r$ of noise lvl vs. 
std?\", \"I\\u2019m confused by section 4.6 & Fig. 7; How exactly does the dynamic SINDy approach compare now to the proposed baseline methods based on SLDS and (vanilla?) SINDy with a group sparsity norm? I think Fig. 7 would be much clearer if the authors would find a design to compare all comparison methods side-by-side.\", \"ll. 409-411: Can the authors provide references for the mentioned studies?\", \"How do other methods like reservoir computing compare to the dynamic SINDy approach qualitatively and quantitatively in the settings discussed in the manuscript (see e.g. [1])?\", \"How does the approach perform on e.g. benchmarks used in [2], which exhibit different bifurcations than the ones discussed in this paper?\", \"I am very happy to increase my score if the authors adequately address my concerns and questions.\"], \"references\": \"[1] K\\u00f6glmayr, Daniel, and Christoph R\\u00e4th. \\\"Extrapolating tipping points and simulating non-stationary dynamics of complex systems using efficient machine learning.\\\" Scientific Reports 14.1 (2024): 507.\\n\\n[2] Patel, Dhruvit, and Edward Ott. \\\"Using machine learning to anticipate tipping points and extrapolate to post-tipping dynamics of non-stationary dynamical systems.\\\" Chaos: An Interdisciplinary Journal of Nonlinear Science 33.2 (2023).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
16kG5aNleS | Transformer Meets Twicing: Harnessing Unattended Residual Information | [
"Laziz Abdullaev",
"Tan Minh Nguyen"
] | Transformer-based deep learning models have achieved state-of-the-art performance across numerous language and vision tasks. While the self-attention mechanism, a core component of transformers, has proven capable of handling complex data patterns, it has been observed that the representational capacity of the attention matrix degrades significantly across transformer layers, thereby hurting its overall performance. In this work, we leverage the connection between self-attention computations and low-pass non-local means (NLM) smoothing filters and propose the Twicing Attention, a novel attention mechanism that uses *kernel twicing procedure* in nonparametric regression to alleviate the low-pass behavior of associated NLM smoothing with compelling theoretical guarantees. This approach enables the extraction and reuse of meaningful information retained in the residuals following the imperfect smoothing operation at each layer. Our proposed method offers two key advantages over standard self-attention: 1) a provably slower decay of representational capacity and 2) improved accuracy across various data modalities and tasks. We empirically demonstrate the performance gains of our model over baseline transformers on multiple tasks and benchmarks, including image classification and language modeling, on both clean and corrupted data. | [
"transformers",
"self-attention",
"oversmoothing",
"nonlocal smoothing",
"nonparametric regression"
] | Accept (Poster) | https://openreview.net/pdf?id=16kG5aNleS | https://openreview.net/forum?id=16kG5aNleS | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xLQIScTLUE",
"tiINLkGGvK",
"rKBqzhrCtq",
"pakqbdg5Ge",
"pRMMthBLjH",
"nNmUXGiHGY",
"kz7wTbyhEp",
"km3dlA1vSG",
"kjOyFArdQm",
"kif2jxPPDU",
"jFUxJA7Jfc",
"j2FHn4o6Ph",
"iNr7y1q1fy",
"gsagV1x98p",
"eXEB6dbwUN",
"cbgQEiPa7i",
"ZcgoGmaAbR",
"YQFYoYIFwj",
"XUOHh3pDkG",
"VXKeXnKOsW",
"TIu1D00YlO",
"SCG4DrCO1h",
"PGlaGFnejg",
"NTXkWEGVeS",
"NJhy156OC1",
"KZo6jDsWSL",
"Jz8IIqDvMI",
"HxOQLlRP8r",
"GNY5HO9zcU",
"G6zAXHz98K",
"F23SGWJNyd",
"CfKunoWUZm",
"CUzWMRpuMl",
"0QJTKCaGvp"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732429007910,
1732067808605,
1732286297019,
1732068263239,
1732283179564,
1733009788624,
1732292246246,
1730480016960,
1732529701331,
1730638997089,
1732069683653,
1732374203356,
1732526182821,
1737523626504,
1732732358285,
1732429583311,
1732292294112,
1734828136874,
1732068687908,
1732068774868,
1732069538086,
1732292352953,
1732695270018,
1732069138110,
1730686016849,
1732548338364,
1730627058486,
1732783860896,
1732067679789,
1732292193661,
1733069267001,
1732371869718,
1732379834998,
1732069194867
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_3adS"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_7QSP"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_3adS"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_3adS"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Area_Chair_5BjA"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_CqxC"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_pR7y"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_3adS"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_CqxC"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Reviewer_pR7y"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4224/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Robustness of Twicing Attention within Unconventional Transformers\", \"comment\": \"We would like to thank the reviewer again for your valuable initial reviews and feedback.\\n\\nWe took our time to also test Linear-Twicing model's robustness againt Word Swap Attack using the same experimental setting as in Section 4.2 of the main text. As we show in Table B1 below, Linear-Twicing Transformer offers a significant improvement of almost 2 PPL points over the baseline Linear Transformer. This further validates that Twicing Attention can indeed enhance robustness of the model even with unconventional attention mechanisms. This result has been included in Table 7 of the revised document appendix.\\n\\n**Table B1:** Clean/Attacked Test PPL on Wikitext-103 under Word Swap attack.\\n| Model | Clean PPL | Attacked PPL |\\n|-------|----------------|----------|\\n| Linear Trans. | 41.26 | 72.13\\n| Linear-Twicing Trans. | **40.61** | **70.40**\\n\\nWe hope this additional result provides additional justification of our insights provided in the previous replies and further addresses your question.\"}",
"{\"title\": \"Summary of Revisions\", \"comment\": \"According to the comments and suggestions from reviewers, we have applied the following changes to the updated version of the paper:\\n\\n1. General edit: We have cut the text on pages 7 and 8 to make the presentation more compact than before as suggested. To fill the created space, we have added more experiments, insights and theoretical explanations for the robustness of Twicing Attention, an aspect of our model which we seem not to have emphasized enough before. In particular, we associate the small bias property of the underlying twicing kernels to the reduction of bandwidth sensitivity [4, 1] and robustness to input perturbations through [5].\\n2. Extra experiments: We have conducted additional experiments on the image segmentation task on ADE20K dataset, and presented the comparison results between DeiT and DeiT-Twicing in Table 3 of the main text evaluated across three key metrics. We observe performance improvements over all metrics as reported. We have also provided the necessary experimental details in Appendix B.5. Besides, we have added a new model NeuTRENO (Nguyen et al, 2023) as a comparison model as requested. Also, we have trained a larger language modeling on Wikitext-103 to verify the scaling potential of Twicing Attention when implemented inside LLMs and obtained a positive answer as reported in Figure 7 and Table 6 of Appndix B.1.\\n3. Extra empirical analysis: As suggested by the reviewer 7QSP, we have provided the evolution of attention heatmaps for DeiT and DeiT-Twicing from early to late layers together with dozen of extra last layer heatmaps for more input images to strengthen our claims in Appendix D.2. We have also extended oversmoothing analysis in Figure 2 by conducting a similar experiment on ADE20K image segmentation task, and the results are positive and shown in Figure 8 in the appendix. In both cases, token smoothing is slower with Twicing Attention, validating our theoretical results.\\n4. Related works: We have added a discussion on the two papers [6, 7] studying the feature maps of Vision Transformers as suggested by the reviewer 7QSP since we found them indeed relevant. We have also added a new paragraph to the section dedicated to the research on robust transformer models building upon Point 1 of our Summary of Revisions.\\n\\n### References:\\n[1]: Newey, W.K., F. Hsieh, and J.M. Robins (2004). \\\"Twicing Kernels and a Small Bias Property of Semiparametric Estimators.\\\" Econometrica, Vol. 72, No. 3, pp. 947\\u2013962.\\n\\n[4]: Stuetzle, W., and Y. Mittal (1979): \\\"Some Comments on the Asymptotic Behavior of Robust Smoothers\\\", in Smoothing Techniques for Curve Estimation, Lecture Notes, 757. New York: Springer-Verlag, 191\\u2013195.\\n\\n[6]: Caron, M., Touvron, H., Misra, I., J\\u00e9gou, H., Mairal, J., Bojanowski, P., & Joulin, A. (2021). Emerging properties in self-supervised vision transformers. Proceedings of the International Conference on Computer Vision (ICCV).\\n\\n[7]: Darcet, T., Oquab, M., Mairal, J., & Bojanowski, P. (2024). Vision transformers need registers. Published as a conference paper at ICLR 2024.\"}",
"{\"title\": \"Additional Experimental Results\", \"comment\": \"Dear reviewers,\\n\\nWe would like to thank all reviewers again for your thoughtful reviews and feedback. We have obtained additional experimental result that validates the concept of Twicing Attention is not tied to the exact form of standard softmax self-attention, but it offers improvements for any reasonable similarity matrices including different types and forms of attention mechanisms as described below.\\n\\nWe have conducted additional experiments with Linear Transformers [9] as described in our previous comment. Table A1 below compares the perplexities recorded for Linear Transformers with feature map $\\\\phi(x) = \\\\text{elu}(x)+1$ matching their original choice, and Linear-Twicing Transformers for which we apply the twicing transformation $2A-A^2$ where $A = \\\\text{normalize}(\\\\phi(Q)\\\\phi(K)^\\\\top)$. Note that we explicitly construct the similarity matrix $A$ for both of the models for our framework to work. On top of Table A1 results, we also observe relatively faster convergence for Linear-Twicing, very similar trend to what is illustrated in Figure 7 in the revised appendix of the paper. The positive results indicate that the applicability of Twicing Attention is not limited to standard softmax self-attention, but it is compatible with any reasonable similarity matrix. We have appended this result to Appendix B.1 and Table 6 of the revised document (highlighted by blue color).\\n\\n**Table A1:** Validation/Test PPL on Wikitext-103 trained for 75 epochs.\\n| Model | Validation PPL | Test PPL |\\n|-------|----------------|----------|\\n| Linear Trans. | 40.00 | 41.26 |\\n| Linear-Twicing Trans. | **39.45** | **40.61** \\n\\nWe would be happy to engage in any follow-up discussion or address any additional comments by the reviewers.\\n\\n**References:**\\n\\n[9]: Katharopoulos, A., Vyas, A., Pappas, N., & Fleuret, F. (2020). Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML). PMLR.\"}",
"{\"comment\": \"Thank you for your thoughtful review and valuable feedback. Below we address your concerns.\\n\\n-----\\n\\n\\n**Q1. The paper compensates for the simplicity of the core idea by over-explaining and being overly verbose. For example, most of the material on pages 7-8 can be summarised in 2-3 paragraphs. Even Algorithm 1 on page 8 is redundant and too verbose. The algorithm's objective is clear and simple: to compute $2A-A^2$. I don't think one needs 12 lines to explain that.**\\n\\n**Answer:** While we intended to make our narrative of NLM denoising and nonparametric regression perspective more comprehensible to the readers, we agree with the reviewer on the point that the content on the page 7, in particular, can be compressed into a more compact text. We have editted that section in our revision to achieve a concise alternative accordingly. We use the created space for extra insights on the robustness of Twicing Attention (Section 3.3), as well as extra experimental results (Section 4.1).\\n\\n**Q2. Instead, the paper could have added to its contribution through a more thorough study. E.g., one avenue for improvement would be to consider other candidates besides the $2A-A^2$ and then compare them in the considered benchmarks.**\\n\\n**Answer:** Even though we agree that there is still room for further empirical studies, we would argue that considering other \\\"candidates\\\" besides $2A-A^2$ is actually a little bit controversial since $2A-A^2$ is an only theoretically justified choice enabling us to study Twicing Attention through the lens of the well-established twicing procedure (which actually sets the paper title) in image reconstruction theory and nonparametric regression regime [3, 4]. The identity $(2A_\\\\ell-A_\\\\ell^2)V_\\\\ell = A_\\\\ell V_\\\\ell + A_\\\\ell(V_\\\\ell - A_\\\\ell V_\\\\ell) = A_\\\\ell V_\\\\ell + A_\\\\ell\\\\cdot \\\\text{r}_{\\\\ell}$ is a quick way of reiterating the core motivation behind this very choice. \\n\\nFor the sake of comparison and further insights, however, we have conducted additional experiments to study other candidates that are intended to approximate the twicing procedure without compromising the baseline efficiency. We report the results in Table A below. Note that each model in Table A is trained for 200 epochs. The compared models in this study are inspired by the core idea of twicing procedure--adding back the smoothed residual to the estimation. We observe that efficient approximations often exhibit faster initial convergence rates; however, they are less stable tend to fall behind the full model in later stages of training, as they struggle to capture and learn the more complex patterns which models are expected to learn in later stages. We still believe that such efficient versions can be made work well, yet we leave it for future research.\\n\\n**Table A:** Comparison of DeiT-Twicing and its efficient approximations as explained.\\n| Model | Top 1 | Top 5 | Explanation |\\n|--------------------|--------------------|--------------------|----|\\n| DeiT | 66.85 | 88.06 |\\n| DeiT-Twicing | **67.43** | **88.45** |\\n| Approx. Twicing [*overlayer residual*] | 67.12 | 88.13 | Using previous layer residual $A_{\\\\ell}(V_{\\\\ell-1} - A_{\\\\ell-1}V_{\\\\ell-1})$ for twicing procedure for efficiency\\n| Approx. Twicing [*temporal residual smoothing*] | 67.08 | 88.06 | accumulating the residuals from previous layers with weight $\\\\frac{\\\\ell}{\\\\ell+3}$ for $\\\\text{r}_{\\\\ell}$. 
This effectively smoothes the residuals temporally (without \\\"spatial\\\" smoothing via $A$)\\n| Approx. Twicing [*local residual smoothing*] | 67.00 | 88.25 | Using $AV + \\\\text{band}(A, w)(V-AV)$ where $\\\\text{band}(A, w)$ extracts a banded part (diagonal strip) of $A$ of width $w \\\\ll N$ for fast matrix multiplication.\\n\\nFor further comparison with different candidates, we introduced a hyper-parameter into Twicing Attention as $AV + \\\\lambda A(V-AV) = [(1+\\\\lambda)A - \\\\lambda A^2]V$. Then, we train this model on ImageNet classification with $\\\\lambda = 1/2$, which lies right in the middle of baseline self-attention and our Twicing attention, so that it can capture the general effect of such scaling. We present our results in Table B below. While we find that it still offers improvements over the baseline, it falls behind the original Twicing Attention, and justifies the use of the proposed model with theoretical support.\\n\\n**Table B:** Comparison of DeiT-Twicing models using $2A-A^2$ and $(1+\\\\lambda) A - \\\\lambda A^2$ as a similarity matrix.\\n| Model | Top 1 | Top 5 |\\n|---|---|---|\\n| DeiT | 72.00 | 91.14 |\\n| Twicing ($2A-A^2$) | **72.60** | **91.33** |\\n| Twicing ($(1+\\\\lambda) A - \\\\lambda A^2$) | 72.41 | 91.22\\n\\n\\n-----\\nWe hope we have cleared your concerns about our work in this response and revised document. We would appreciate it if we can get your further feedback at your earliest convenience.\"}",
"{\"title\": \"Additional results with unconventional attention mechanisms\", \"comment\": \"We have conducted additional experiments with Linear Transformers [9] as described in our previous comment. Table A1 below compares the perplexities recorded for Linear Transformers with feature map $\\\\phi(x) = \\\\text{elu}(x)+1$ matching their original choice, and Linear-Twicing Transformers for which we apply the twicing transformation $2A-A^2$ where $A = \\\\text{normalize}(\\\\phi(Q)\\\\phi(K)^\\\\top)$. Note that we explicitly construct the similarity matrix $A$ for both of the models for our framework to work. On top of Table A1 results, we also observe relatively faster convergence for Linear-Twicing, very similar trend to what is illustrated in Figure 7 in the revised appendix of the paper. The positive results indicate that the applicability of Twicing Attention is not limited to standard softmax self-attention, but any reasonable similarity matrix can be covered. Lastly, we have added this results in Appendix B.1 and the appendix Table 6 of the revised document.\\n\\n**Table A1:** Validation/Test PPL on Wikitext-103 trained for 75 epochs.\\n| Model | Validation PPL | Test PPL |\\n|-------|----------------|----------|\\n| Linear Trans. | 40.00 | 41.26 |\\n| Linear-Twicing Trans. | **39.45** | **40.61** \\n\\nWe hope this additional results complements our previous response to your question and clears your related concerns. We would be glad to hear your futher feedback on our work and rebuttal at your earliest convenience.\"}",
"{\"title\": \"Future suggestions\", \"comment\": \"I appreciate your effort in adding this experiment (score raised to 6), but I have a future request: could you please add some more analysis regarding finetuning on pretrained off-the-shelf large language models (e.g. LLaMA) in the future revision and conduct through evaluations using LLM eval benchmarks? I do understand that due to limited time in the discussion phase, you are unable to do this.\"}",
"{\"title\": \"Any Questions from Reviewer 3adS on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"summary\": \"This paper introduces the Twicing Attention mechanism, drawing inspiration from established connections between self-attention and low-pass non-local means (NLM) smoothing filters. The authors demonstrate two key advantages of their proposed method: 1) a theoretically proven slower decay of representational capacity across transformer layers, and 2) improved performance on both vision and language tasks across multiple datasets. The paper's primary contribution lies in its theoretical framework. It first establishes that representation collapse in transformers stems from the inherent low-pass characteristics of NLM filters. The authors then provide proof showing that the twicing formulation ($2A^2-A$) offers superior theoretical properties compared to standard attention ($A$), particularly in preserving token diversity and meaningful feature representations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Theoretical foundation: The paper provides analysis connecting self-attention to NLM filters.\", \"Fluent presentation flow: Messages of this paper are presented well, with well-demonstrated background knowledge and motivation.\", \"Empirical validation: The authors provide visualizations of attention heatmaps, which validates their claim that their method preserve the diversity of token representations\"], \"weaknesses\": [\"**Narrow Problem Framing**: The paper's central premise regarding \\\"representation collapse\\\" in transformers warrants closer scrutiny. Recent research has demonstrated that this phenomenon is not inherent to transformer architectures. For instance, DINO(Caron et al., 2021) demonstrates that self-supervised training can produce well-structured, diverse token representations in Vision Transformers. Furthermore, Darcet et al. (2024) provide evidence that apparent \\\"collapse\\\" may actually reflect a more nuanced information distribution pattern, where artifacts in attention heatmaps encode global information while non-artifact tokens maintain distinct representations, albeit with lower similarity to the CLS token.\", \"**Additional computational cost and marginal empirical improvements**: Performance increase in Table 4 is in trade of computational cost. Hardly can engineers be convinced to substitute the original attention with the proposed one.\", \"**Limited Evaluation Scope**: The authors report the empirical performance on classification tasks for vision models. Yet dense tasks such as segmentation are more direct and effective in evaluating the structure of patch representations produced by the method.\"], \"questions\": [\"Visualizations on earlier layers and more heads of the transformers would help to strengthen your claim.\", \"Please refer to the weakness.\", \"I am open to increase my score if you alleviate my concerns.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for your consideration\", \"comment\": \"Thank you for your response and support.\\n\\nWe would greatly appreciate it if you could share any remaining concerns about our work so that we can address them before the rebuttal period concludes. We are more than happy to engage in follow-up discussions to resolve your concerns and kindly ask you to consider whether raising your score to 6 might better reflect your updated evaluation of our paper.\\n\\nThank you once again for your time and thoughtful feedback!\"}",
"{\"summary\": \"The self-attention mechanism's representational capacity diminishes significantly across layers, and this oversmoothing effect is reducing overall performance. This paper introduces Twicing Attention, a novel mechanism that connects self-attention computations with low-pass non-local means smoothing filters. By employing a kernel twicing procedure, it alleviates the low-pass effects of NLM smoothing while preserving meaningful information from residuals. Twicing Attention offers slower decay of representational capacity and improved accuracy across different data modalities. Significant performance improvement brought by Twicing attention is observed in multiple tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Novelty: The authors introduce the Twicing Attention mechanism to slow down the eigenvalue decay associated with representational collapse.\\n2. Theoretical Contribution: the authors provide mathematical validation for the Twicing Attention\\u2019s capability to retain information across layers.\\n3. Experiments: the authors evaluate their proposed methods on both language models and vision models.\", \"weaknesses\": \"1. Limited improvement: The gains in clean data settings (such as on ImageNet in Tab. 1) are modest.\\n2. Lack of comparison: the work does not compare its method with alternative solutions that address oversmoothing, such as regularization strategies.\\n3. Lack of ablations: the authors are suggested to consider applying the proposed method at different layer depths or intervals and evaluate their difference.\", \"questions\": \"My question lies in the efficiency comparison (Tab. 4). Despite the fact that Twicing has the same complexity of $O(N^2 d)$ as claimed in the paper, it still increases the overhead by an additional 50% due to the extra matrix multiplication in line 7, Alg. 1. However, Tab. 4 indicates that implementing Twicing or not will not incur big difference on both speed and GFLOPs. What is the reason behind that? I would appreciate a more detailed efficiency analysis & comparison in the rebuttal phase if possible.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q2. Additional computational cost and marginal empirical improvements: Performance increase in Table 4 is in trade of computational cost. Hardly can engineers be convinced to substitute the original attention with the proposed one.**\\n\\n**Answer:** Even though Twicing Attention offers relatively modest accuracy improvements in clean data settings, we believe that the complimentary robustness comparisons make the Twicing models stand out as substantially better models overall. In particular, Twicing models show capability to offer up to ~19\\\\% improvement (FAN, PGD) with average of about ~8\\\\% performance gains across all attacks. Besides, Figure 4 in the appendix shows that Twicing Attention can notably outperform the baseline across 15 types of natural corruption types consistently (about ~10% improvement on \\\"contrast\\\", \\\"gaussian noise\\\", and \\\"impulse noise\\\" to name but a few). It is worth noting that even many tailored robust models available with similar performances also impose similar additional computational cost. Additionally, this empirical observation is also theoretically consistent and interesting in the following sense: it suggests that Twicing models are inherently more stable across both clean and corrupted data settings by prioritizing stable representations over being specialized too much on clean data accuracy\\u2014an aspect that can make models more susceptible to small perturbations, such as common adversarial attacks. Additionally, drawing on the small-bias property of Twicing kernels in nonparametric regression, one can argue that the resulting estimator is relatively less sensitive to bandwidth selection. This reduced sensitivity mitigates the bias fluctuations often introduced by slight perturbations, making the estimator inherently more resilient to minor perturbations and improve its reliability. Our experimental results under 3 widely adopted adversarial attacks validate that Twicing Attention is indeed significantly more robust compared to the baseline self-attention.\\n\\n**Q3. Limited Evaluation Scope: The authors report the empirical performance on classification tasks for vision models. Yet dense tasks such as segmentation are more direct and effective in evaluating the structure of patch representations produced by the method.**\\n\\n**Answer:** Thank you for your feedback emphasizing the importance of evaluating our method on dense tasks like image segmentation to better assess patch representations. In response to your suggestion, we have conducted additional experiments on image segmentation and report the results in the table below and in Table 3 of the paper:\\n \\n **Table G:** Image Segmentation on ADE20K. \\n| Model | Pixel Acc. | Mean Acc. | Mean IoU |\\n|-------|------------|-----------|----------|\\n| DeiT | 77.25 | 44.48 | 34.73 |\\n| DeiT-Twicing | **77.51** | **45.53** | **35.12**\\n\\nThese results indicate that our proposed DeiT-Twicing method offers improvements across key segmentation metrics, including Pixel Accuracy, Mean Accuracy, and Mean IoU, compared to the baseline DeiT model.\\n\\n**Q4. Visualizations on earlier layers and more heads of the transformers would help to strengthen your claim.**\\n\\n**Answer:** We appreciate the reviewer's suggestion on this matter. 
Accordingly, we have added the visualizations from early to late layers as well as Figure 8 on alternative over-smoothing analysis on 2 datasets in Appendix D.2 of the revised document.\\n\\n-----\\nWe hope we have cleared your concerns about our work. We have also revised our manuscript according to your comments, and we would appreciate it if we can get your further feedback at your earliest convenience.\"}",
"{\"title\": \"Thanks for your endorsement!\", \"comment\": \"Thanks for your response, and we appreciate your endorsement.\"}",
"{\"title\": \"Concerns mostly addressed\", \"comment\": \"I would like to thank the authors for their rebuttal. My overhead concern is mostly addressed. I will raise my score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Thanks for your endorsement!\", \"comment\": \"Thanks for your response, and we appreciate your endorsement.\"}",
"{\"comment\": \"We would like to thank the reviewer again for your valuable initial reviews and feedback.\\n\\nFor additional robustness comparison, we benchmarked DeiT-NeuTRENO against ImageNet out-of-distribution and natural image corruptions, and found our model DeiT-Twicing being better in 3 out of 4 tests except ImageNet-A. However, NeuTRENO deteriorates the performance of the baseline DeiT in ImageNet-R, an observation which supports our model DeiT-Twicing to be **more stable** with *no extra hyper-parameters* introduced. This additional result has been included in Table 9 of the revised document appendix.\\n\\n| Model | ImageNet-A ($\\\\uparrow$) | ImageNet-R ($\\\\uparrow$) | ImageNet-C ($\\\\downarrow$) | ImageNet-C (Extra) ($\\\\downarrow$) |\\n|-------|------------|------------|------------|--------------------|\\n| DeiT | 6.97 | 32.22 | 72.21 | 63.68 |\\n| NeuTRENO | **8.36** | 31.65 | 70.51 | 63.56\\n| DeiT-Twicing [10-12] | *8.14* | *32.31* | **70.25** | *62.63*\\n| DeiT-Twicing | 7.66 | **32.74** | *70.33* | **62.46** |\\n___\\nWe hope this additional results complement our previous response to your question and clears your related concerns. We would be glad to hear your futher feedback on our work and rebuttal at your earliest convenience.\"}",
"{\"title\": \"Any Questions from Reviewer CqxC on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"metareview\": \"This paper receives ratings of 8, 5, 6, 6, where the reviewers generally provided a positive assessment of this manuscript. In this paper, the authors propose Twicing Attention, a new self-attention mechanism designed to address representational collapse in transformers caused by over-smoothing. The authors established a connection between self-attention and nonparametric regression using twicing kernels. The proposed Twicing Attention mechanism leverages residuals between attention input and output, providing improvements in representational capacity and robustness. Experimental results also show taht Twicing Attention demonstrates consistent improvements across multiple tasks and benchmarks.\", \"strengths\": [\"The proposed Twicing Attention mechanism introduces a new perspective by leveraging residual information through twicing kernels, offering a novel solution to the representational collapse issue in transformers.\", \"Theoretical Rigor: this paper provides a strong theoretical foundation, connecting self-attention mechanisms to nonparametric regression techniques and demonstrating slower decay of representational capacity.\", \"Comprehensive experiments on vision (ImageNet, ADE20K) and language tasks (WikiText-103) showcase consistent performance improvements over baseline models. Enhanced robustness is also demonstrated under adversarial attacks and distribution shifts, indicating the practicality of the method.\", \"The mechanism is computationally efficient, requiring minimal additional overhead when selectively applied to specific layers of transformers.\", \"By addressing the limitation in transformer models (over-smoothing), the method has the potential to influence various applications in NLP, computer vision, and beyond.\"], \"areas_for_improvement\": [\"While the empirical results are compelling, comparisons with more diverse state-of-the-art methods could strengthen the claims o superiority and broader applicability.\", \"Some aspects of the theoretical framework, while rigorous, could be made more accessible to practitioners through intuitive explanations/visualizations.\", \"While the method is validated on several tasks, demonstrating its effectiveness rigorously across more diverse domains and larger datasets could enhance its impact.\", \"The paper briefly acknowledges limitations but could provide a more detailed discussion of areas for improvement and specific directions for future research.\", \"The reviewers praised the paper\\u2019s strong theoretical underpinnings and empirical results, particularly its contributions to addressing over-smoothing in transformers. The paper also introduces a novel and elegant solution that aligns well with recent trends in enhancing transformer architectures. And therefore we recommend acceptance of this paper.\"], \"additional_comments_on_reviewer_discussion\": \"The authors and reviewers engaged in active and productive discussions during the rebuttal period, which helped refine and clarify the contributions of this manuscript. They highlighted its novel contributions and rigorous theoretical foundations. Minor concerns were raised regarding the clarity of some theoretical aspects and the scope of experimental comparisons, which were largely addressed in the authors\\u2019 rebuttal.\\n\\nThe authors addressed the reviewers\\u2019 concerns effectively, providing clarifications on theoretical aspects, additional ablation studies, and further discussions on the practical implications of their work. 
The authors also provided thoughtful responses about the method's limitations and its scalability to larger datasets and models, demonstrating awareness of future research directions.\"}",
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. Below we address your concerns.\\n\\n-----\\n\\n\\n**Q1. Limited improvement: The gains in clean data settings (such as on ImageNet in Tab. 1) are modest.**\\n\\n**Answer:** We agree that Twicing Attention offers relatively modest accuracy improvements in clean data settings in Table 1. However, the clean data performance is not the main/only claim that we make about our model but improved overall accuracy (both under clean and corrupted data settings). Rather, we believe that the complimentary robustness comparisons make the Twicing model stand out as a substantially better model overall. In particular, Twicing models show capability to offer up to ~19\\\\% improvement (FAN, PGD) with average of about ~8\\\\% performance gains across all attacks. Besides, Figure 4 in the appendix shows that Twicing Attention can notably and consistently outperform the baseline across all 15 types of natural corruption types (about ~10% improvement on \\\"contrast\\\", \\\"gaussian noise\\\", and \\\"impulse noise\\\" to name but a few). Zooming out a little bit, this observation is interesting and consistent with the theory in the following sense: it suggests that Twicing models are inherently more stable across both clean and corrupted data settings by prioritizing stable representations over overtuned accuracy on clean accuracy\\u2014an aspect that can make models more susceptible to small perturbations, such as common adversarial attacks. Additionally, drawing on the small-bias property of Twicing kernels in nonparametric regression, one can argue that the resulting estimator is relatively less sensitive to bandwidth selection [1]. This reduced sensitivity mitigates the bias fluctuations often introduced by slight adjustments, making the estimator inherently more resilient to minor perturbations and improve model's robustness in general. Our experimental results under 3 widely adopted adversarial attacks validate that Twicing Attention is indeed significantly more robust compared to the baseline self-attention. We also refer to [2] for a more detailed robustness of twicing kernels in regression compared to the Nadaraya-Watson estimator kernel before twicing. At the same time, nonetheless, we also see in Table 4 of the revised document that improvements on clean and contaminated data for language modeling are comparable.\\n\\n[1]: Newey, W.K., F. Hsieh, and J.M. Robins (2004). \\\"Twicing Kernels and a Small Bias Property of Semiparametric Estimators.\\\" Econometrica, Vol. 72, No. 3, pp. 947\\u2013962.\\n\\n[2]: Chernozhukov, V., Escanciano, J. C., Ichimura, H., Newey, W. K., & Robins, J. M. (2022). Locally robust semiparametric estimation. Econometrica: Journal of the Econometric Society.\\n\\n**Q2. Lack of comparison: the work does not compare its method with alternative solutions that address oversmoothing, such as regularization strategies.**\\n\\n**Answer:** We have conducted additional experiments comparing our method with an alternative model, NeuTRENO [8], that uses a nonlocal functional regularization to mitigate oversmoothing by constantly fusing with initial layer tokens. In the table below, we report the Top 1/Top 5 accuracy on ImageNet, as well as their robustness against PGD, FGSM and SPSA adversarial attacks. 
We observe that while both models outperform the baseline DeiT, DeiT-Twicing offers relatively larger improvements in almost all metrics.\\n\\n**Table C1:** ImageNet classification under clean and adversarially attacked settings.\\n| Model | Top 1 | Top 5 | PGD Top1/Top5 | FGSM Top1/Top5 | SPSA Top1/Top5\\n|-------|-------|-------|-----|------|---|\\n| DeiT | 72.00 | 91.14 | 8.16 / 22.37 | 29.88 / 63.26 | 66.41 / 90.29\\n| NeuTRENO | 72.44 | **91.40** | 8.85 / 23.83 | 31.43 / **65.96** | 66.98 / 90.48\\n| DeiT-Twicing | **72.60** | 91.33 | **9.15** / **24.10** | **32.28** / 65.67 | **67.12** / **90.53**\\n\\n[8]: Nguyen, T. M., Nguyen, T. M., & Baraniuk, R. (2023). Mitigating over-smoothing in transformers via regularized nonlocal functionals. NeurIPS 2023.\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q3. Lack of ablations: the authors are suggested to consider applying the proposed method at different layer depths or intervals and evaluate their difference.**\\n\\n**Answer:** In Appendix E, we compare 3 different choices of layer placements for Twicing Attention [1 to 12, 7 to 12, and 10 to 12]. As a result, we observe in particular that overall performance is roughly proportional to the number of Twicing layers in terms of clean data and under adversarial attacks. In Table C2 below, we report 2 more models--twicing at even layers, and using previous layer residual for twicing procedure for efficiency. We observe that even though DeiT-Twicing [even layers] has 6 Twicing Attention layers just as DeiT-Twicing [7-12], the latter model does better than the former. This validates that if one has capability to implement $n$ layers (out of $L > n$ total layers) with Twicing Attention, it is better to place them as the latest n contiguous layers of the transformer.\\n\\n\\n**Table C2:** Ablation of Twicing Attention placed at different layers.\\n| Model | Top 1 | Top 5 | Explanation |\\n|--------------------|--------------------|--------------------|----|\\n| DeiT | 72.00 | 91.14 |\\n| DeiT-Twicing [1-12] | **72.60** | 91.33 | Twicing Attention at all layers\\n| DeiT-Twicing [7-12] | 72.45 | **91.35** | Twicing Attention at last 6 layers\\n| DeiT-Twicing [10-12] | 72.31 | 91.24 | Twicing Attention at last 3 layers\\n| DeiT-Twicing [*even layers*] | 72.42 | 91.28 | Twicing Attention at even layers\\n| DeiT-Twicing [*overlayer residual*] | 72.02 | 91.08 | Using previous layer residual\\n\\n**Q4. My question lies in the efficiency comparison (Tab. 4). Despite the fact that Twicing has the same complexity of as claimed in the paper, it still increases the overhead by an additional 50% due to the extra matrix multiplication in line 7, Alg. 1. However, Tab. 4 indicates that implementing Twicing or not will not incur big difference on both speed and GFLOPs. What is the reason behind that? I would appreciate a more detailed efficiency analysis & comparison in the rebuttal phase if possible.**\\n\\n**Answer:** We appreciate the reviewer\\u2019s attention to a potential source of confusion regarding Table 4. We elaborate on the details of that efficiency analysis as follows. Our model does not add 50% more computational cost as reported is the efficiency statistics considering the end-to-end flow of an input through the transformer. In fact, the additional computation--specifically calculating $A(V - A V)$ (with the pre-computed $AV$ as in Algorithm 1)--only marginally increases the total workload when considering the entire Transformer architecture. While this operation does add extra steps to the attention mechanism, the overall computational cost is dominated by other components, such as the feed-forward networks and linear transformations. These components combined require significantly more computation than the attention mechanism alone. Furthermore, attention layer itself is not doubled in terms of computational complexity since Twicing only adds an extra attention-weighted averaging while the basement of standard self-attention already consists of computing $QK^T$ and $\\\\text{softmax}(\\\\cdot)$ (which we do not repeat for Twicing) other than $AV$ matrix operation. 
As a result, theoretically, the added computation increases the total computational cost by only roughly about 7% (which is approximately consistent with Table 4 results) if we analyze a modestly simplified Transformer architecture in terms of component-wise runtime complexities and their contributions to the overall computational overhead. Considering the partial model, Twicing [10-12], we see that the additional overall computation is virtually negligible while offering a decent relative performance. It is also worth noting that since Twicing does not introduce any learnable parameters, its contribution to complexity of backward passes is minimal during pre-training.\\n\\n-----\\nWe hope we have cleared your concerns about our work. We have also revised our manuscript according to your comments, and we would appreciate it if we can get your further feedback at your earliest convenience.\"}",
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. Below we address your concerns.\\n\\n-----\\n\\n\\n**Q1. Narrow Problem Framing: The paper's central premise regarding \\\"representation collapse\\\" in transformers warrants closer scrutiny. Recent research has demonstrated that this phenomenon is not inherent to transformer architectures. For instance, DINO(Caron et al., 2021) demonstrates that self-supervised training can produce well-structured, diverse token representations in Vision Transformers. Furthermore, Darcet et al. (2024) provide evidence that apparent \\\"collapse\\\" may actually reflect a more nuanced information distribution pattern, where artifacts in attention heatmaps encode global information while non-artifact tokens maintain distinct representations, albeit with lower similarity to the CLS token.**\\n\\n**Answer:** We find the two papers that the reviewer brings to our attention interesting in terms of characterizing and understanding the emergent artifacts in the feature maps. Intriguingly, interpreting artifacts as locations where the model stores global input information\\u2014elegantly handled with extra register tokens in Darcet et al. (2024)\\u2014aligns (in a weak sense) with an image denoising perspective as well. When weighted averaging (blurring) is repeatedly applied, sharp edges are smoothed out, letting global information coming from large background sections dominate the output image. We note that the twicing procedure [3, 4] is tailored to guide the model to benefit from those local information and details before proceeding with another blurring iteration to accomodate both local and global information flow. \\n\\nOn the other hand, there are at least a few fundamental scope differences between the cited papers and ours, and our subject of study is not limited to representation collapse: (i) we mainly focus on the interpretation of our method through the perspective of twicing procedure and its analytical and statistical properties; (ii) while slowing down the decay of representational capacity is one of our contribution, it is not the only one. We believe the theoretical relation to twicing kernels with small bias property and its implications on learning more stable and robust representations is equally important matter of our paper; (iii) Unlike some prior works trying to mitigate over-smoothing completely by constantly fusing with initial layer tokens, we merely aim to slow it down to balance the mitigation of this phenomenon and largely deviating from the native behaviour of transformers to benefit from both worlds. All that being said, it is interesting to note how Twicing Attention heatmaps are usually concentrated over the body of the object while reducing the abovementioned artifacts as shown in a dozen of more sample images in Figure 11 in the appendix. Lastly, attention heatmaps are not the only way we illustrate \\\"collapse\\\", but we observe that, with Twicing Attention, average token similarities across layers indeed increase slower than the baseline as shown in Figure 2, which complements the other visualizations to validate slower collapse. Also, please see newly added Figure 8 in the appendix for both ImageNet and ADE20K oversmoothing analysis as a validation of our theoretical results on slower collapse.\\n\\n**References**\\n\\n[3]: Tukey, J.W. (1977). \\\"Exploratory Data Analysis\\\". Reading, MA: Addison-Wesley.\\n\\n[4]: Stuetzle, W., and Y. 
Mittal (1979): \\\"Some Comments on the Asymptotic Behavior of Robust Smoothers\\\", in Smoothing Techniques for Curve Estimation, Lecture Notes, 757. New York: Springer-Verlag, 191\\u2013195.\"}",
"{\"title\": \"Any Questions from Reviewer 7QSP on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"title\": \"Thanks to the author for thier detailed reply\", \"comment\": \"Firstly, I'm very sorry for responding so late. The authors explained in detail my doubts about their method and added sufficient experiments to back it up (although I didn't ask for more experiments), the additional experiments added to my doubts and curiosity about the method, and I don't have any more questions to ask, I'm even willing to upgrade my rating because of such an informative response.\"}",
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. Below we address your concerns.\\n\\n-----\\n\\n\\n**Q1. Admittedly, the authors' work is very rich and makes a very profound contribution at the theoretical level, but in my humble opinion, the authors' approach serves as a skillful level of reconciliation that moderates the rank collapse in depth, whereas a similar reconciliation skill is actually not uncommon in rank collapse-related research directions. I am not accusing the authors of not being innovative enough, but I hope that the authors can go further at the theoretical level and expand the sequence model that can really reduce the loss of information unlike the classical Transformer.**\\n\\n**Answer:** We thank the reviewer for endorsing the theoretical contributions of the paper. While we agree that manipulating spectrum in different ways is not utterly uncommon in literature, we believe that a modification--in a way that it further connects the attention mechanism to the well-established Twicing procedure in image processing [3] and nonparametric regression [4] with small bias property [1]--is a novel theoretical perspective. Unlike pure engineering heuristics or linear algebraic approaches to moderate the rank collapse, our work offers broader interpretability of Transformers along with the modified attention mechanism. Additionally, we believe that this interpretation can foster further research to study the potential benefits that such traditional statistical frameworks still has to offer for the modern deep learning theory.\\n\\n**Q2. The author's research is more profound, but the experiments are less adequate, with too few test datasets and too few comparison methods. I tend to think that this is the result of too much time constraints, and I hope that the author will add more datasets as well as other experiments on the Transformer if there is enough time.**\\n\\n**Answer:** We appreciate the reviewer's understanding of time constrains. We took our chance to carry out extra image segmentation experiments on another widely adopted dataset ADE20K and report the pixel accuracy, mean accuracy, and mean intersection over union (IoU) metrics to compare against the baseline in Table D below. We find that Twicing Attention offers improvements across all three metrics evaluated.\\n\\n**Table D:** Image segmentation on ADE20K.\\n| Model | Pixel Acc. | Mean Acc. | Mean IoU |\\n|-------|------------|-----------|----------|\\n| DeiT | 77.25 | 44.48 | 34.73 |\\n| DeiT-Twicing | **77.51** | **45.53** | **35.12**\\n\\nFurthermore, we have done experiments with an additional competetitor model, NeuTRENO (Nguyen et al, 2023), that uses a nonlocal functional regularization to mitigate oversmoothing by constantly fusing with initial layer tokens. In the Table E below, we report the Top 1/Top 5 accuracy on ImageNet as well as their robustness against PGD, FGSM and SPSA adversarial attacks as in the paper. 
We observe that while both models outperform the baseline DeiT, our DeiT-Twicing offers relatively more improvements in almost all metrics.\\n\\n**Table E:** Image classification on ImageNet-1K.\\n| Model | Top 1 | Top 5 | PGD Top1/Top5 | FGSM Top1/Top5 | SPSA Top1/Top5\\n|-------|-------|-------|-----|------|---|\\n| DeiT | 72.00 | 91.14 | 8.16 / 22.37 | 29.88 / 63.26 | 66.41 / 90.29\\n| NeuTRENO | 72.44 | **91.40** | 8.85 / 23.83 | 31.43 / **65.96** | 66.98 / 90.48\\n| DeiT-Twicing | **72.60** | 91.33 |**9.15** / **24.10** | **32.28** / 65.67 | **67.12** / **90.53**\"}",
"{\"summary\": \"The over-smoothing problem in Transformers is a well-known phenomenon, where the outputs of different attention layers in a Transformer model are highly similar. This paper introduces Twicing Attention to address this problem, which uses low-pass NLM smoothing filters to tackle this problem. The core idea can be phrased as, instead of using the standard attention matrix $A$, to use $2A - A^2$.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is relatively easy to follow and well-written.\\n2. The proposed \\\"Twicing Attention\\\" is simple and easy to implement.\\n3. Theoretical motivation and the mathematical details behind their motivation and choices have been provided.\", \"weaknesses\": \"1. The paper compensates for the simplicity of the core idea by over-explaining and being overly verbose. For example, most of the material on pages 7-8 can be summarised in 2-3 paragraphs. Even Algorithm 1 on page 8 is redundant and too verbose. The algorithm's objective is clear and simple: to compute $2A - A^2$. I don't think one needs 12 lines to explain that.\\n2. Instead, the paper could have added to its contribution through a more thorough study. E.g., one avenue for improvement would be to consider other candidates besides the $2A - A^2$ and then compare them in the considered benchmarks\", \"questions\": \"I would be grateful if the authors could respond and address the weaknesses. I am willing to increase my score if the authors could address the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks again for the rebuttal\", \"comment\": \"Again, I sincerely thank the authors for their rebuttal.\\nThe reason why I did not raise my score to 6 lies in some deep-rooted reason behind the topic (that might not be easy to rebuttal)\\nFirstly, I am not quite satisfied with the improvement. The amount of improvement, from my point of view, is not **that** significant in cases like clean data, Tab. 2 and 3. It is noteworthy that we are doing an extra multiplication that adds 1/2 of the attention calculation...\\n\\nSecondly, I am a bit of concerned about the actual applications of this method. In the day and age of LLMs, engineers to rely on off-the-shelf pretrained models very often. How could the proposed method be applied to off-the-shelf pretrained models? Could this be done with low training budget? I think this issue might need further clarification to enhance the applicability of this paper. \\n\\nAnyway, I do think this paper reaches the quality of ICLR from my knowledge and I won't object to the decision of acceptance despite an underrated score.\"}",
"{\"summary\": \"This paper propose the Twicing Attention, a novel attention mechanism that uses kernel twicing procedure in nonparametric regression to achieve slower decay of representational capacity and improved accuracy across various data modalities and tasks. And the design of this module builds on the study of the connection between self-attention and NLM smoothing filters. The method was tested on a public dataset, yielding promising results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written, with clear expression of formulae and symbols, and will be highly readable.\\n2. The authors discuss the recurring problem of decay of representational capacity in Transformer, which has also been recognized as a manifestation of rank collapse in other studies. Instead of directly continuing the study of related work on rank collapse, the authors start with the NLM and try to gradually restore the cause of this phenomenon and again based on the proposed method that can alleviate this problem, the research angle is more interesting and also quite theoretical significance.\\n3. The author's description of the solution is complete and accompanied by thorough proofs, the process is clear and easy to understand, and the work done is very informative.\", \"weaknesses\": \"1. Admittedly, the authors' work is very rich and makes a very profound contribution at the theoretical level, but in my humble opinion, the authors' approach serves as a skillful level of reconciliation that moderates the rank collapse in depth, whereas a similar reconciliation skill is actually not uncommon in rank collapse-related research directions. I am not accusing the authors of not being innovative enough, but I hope that the authors can go further at the theoretical level and expand the sequence model that can really reduce the loss of information unlike the classical Transformer.\\n2. The author's research is more profound, but the experiments are less adequate, with too few test datasets and too few comparison methods. I tend to think that this is the result of too much time constraints, and I hope that the author will add more datasets as well as other experiments on the Transformer if there is enough time.\", \"questions\": \"1. For pure curiosity, I would like to ask what the authors think the performance of this method would be in more extreme cases, which in this case refers to two main scenarios: first, the performance on LLMs with a very large number of parameters. Second, on non-classical Transformer structures, such as Linear Transformer and other analogs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We sincerely appreciate the reviewer\\u2019s explanation of your score and your positive assessment of our paper's quality. We recognize that the applicability of any method to LLMs is a significant factor. While we agree with the reviewer that pretraining a large model with Twicing Attention applied at all layers may not be suitable for low-budget scenarios, we would like to share the following results and insights as a potential use case for Twicing Attention, both with off-the-shelf pretrained models and in full pretraining scenarios with lower budgets.\\n___\\n**Fine-tuning a pretrained Switch Transformer.** To show how Twicing Attention can offer improvements to the pretrained models, we pretrain a medium sized (33M params) Switch Transformer [11], a Mixture of Experts architecture, with the standard self-attention on WikiText-103. Then we finetune this pretrained language model on Stanford Sentiment Treebank 2 (SST-2) dataset using standard self-attention (baseline) as well as Twicing Attention (ours) for 8 epochs. Table L1 compares Top 1 finetune test accuracies for both cases and we find that finetuning with Twicing Attention achieves higher accuracy, provided that the fine-tuning is long enough (usually a few more epochs than usual) for the model to adapt to the new attention mechanism.\\n\\n\\n**Table L1:** Switch Transformer Pretrained on WikiText-103 and Finetuned on SST-2.\\n| Mechanism | Fine-tune Test Acc. | #Params |\\n|----|-----|----\\n| Self-Attention | 77.78 | 33M\\n| Twicing Attention | **78.34** | 33M\\n___\\n**Partial Twicing model.** Additionally, we would like to highlight how DeiT-Twicing [10-12] (last 3 layers only) increases the FLOPs by **just over 1%** while improving robustness by 14.3% (ImageNet-A), 2.7% (ImageNet-C) [Table 2 of the paper] even surpassing the full model, and 5.5% (FGSM) [Table 1 of the paper]. We believe such a partially deployed Twicing Attention allows its application for almost negligible extra cost in practice.\\n___\\nThank you once again for your time and thoughtful feedback!\\n\\n**References:**\\n\\n[11]: Fedus, W., Zoph, B., & Shazeer, N. (2022). Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research.\", \"title\": \"Additional Results for Potential Use of Our Methods for LLM Finetuning\"}",
"{\"title\": \"Global Response\", \"comment\": \"Dear AC and reviewers,\\n\\nFirst of all, we thank all reviewers for their endorsements as well as valuable feedback on our work. In particular, reviewers' positive comments on the clarity of our presentation (pR7y, CqxC, 7QSP), significance of our theoretical contribution (all 4 reviewers), and informativeness of the paper (CqxC, pR7y, 7QSP) have been encouraging for us.\\n\\nIn this global response, we would like to address some of the shared concerns among the reviewers as well as reiterate and clarify what major benefits Twicing Attention can offer, especially other than merely alleviating representation collapse.\", \"we_respond_to_each_common_concern_as_follows\": \"1. **Limited accuracy improvement on clean data.** We agree that Twicing Attention offers relatively modest accuracy improvements in clean data settings. However, the clean data performance is not the only claim that we make about our model but improved overall accuracy (both under clean and corrupted data settings). Rather, we believe that the complementary robustness comparisons make the Twicing model stand out as a substantially better model overall. In particular, Twicing models show capability to offer up to a significant ~19\\\\% improvement (FAN, PGD) with average of about ~8\\\\% performance gains across all adversarial attacks. Besides, Figure 4 in the appendix shows that Twicing Attention can notably and consistently outperform the baseline across all 15 types of natural corruption types (about ~10% improvement on \\\"contrast\\\", \\\"gaussian noise\\\", and \\\"impulse noise\\\" to name but a few). At the same time, nonetheless, we also see in Table 4 of the revised document that improvements on clean and contaminated data for language modeling are comparable.\\n2. **Additional computational cost for modest clean accuracy gain.** As mentioned in Point 1 above, the additional computation is also serving to obtain a significantly more robust model. In particular, notice how DeiT-Twicing is comparable to FAN against adversarial attacks while FAN introduces a more sophisticated architecture to achieve that. Additionally, refer to the relative improvements (\\\\%) provided in Point 1. It is worth noting that most tailored robust models available also introduce similar (or sometimes more) computational complexity compared to Twicing (added a new paragraph in Related Works for this comparison). This is also sometimes known as the robustness-efficiency trade-off (RETO) which is hardly avoidable.\\n4. **Narrow problem formulation **(respresntation collapse)**.** While we genuinely understand why some reviewers tend to think that the paper only deals with representation collapse due to our problem introduction style, we would like to reiterate an almost equally important subject of our paper--improving the underlying theoretical denoiser/estimator framework through the twicing procedure [3, 4]--which also ensures more robustness [1, 2, 4] as it helps the model learn more stable representations. Furthermore, another importance of such a theoretical observation along with empirical justification is that it could foster interesting future research to explore more similar frameworks to improve deep learning models in various aspects. In light of this concern, we have adjusted our introduction and the following sections to give a little more importance to the robustness of Twicing Attention both theoretically and empirically.\\n\\n### References:\\n[1]: Newey, W.K., F. 
Hsieh, and J.M. Robins (2004). \\\"Twicing Kernels and a Small Bias Property of Semiparametric Estimators.\\\" Econometrica, Vol. 72, No. 3, pp. 947\\u2013962.\\n\\n[2]: Chernozhukov, V., Escanciano, J. C., Ichimura, H., Newey, W. K., & Robins, J. M. (2022). Locally robust semiparametric estimation. Econometrica: Journal of the Econometric Society.\\n\\n[3]: Tukey, J.W. (1977). \\\"Exploratory Data Analysis\\\". Reading, MA: Addison-Wesley.\\n\\n[4]: Stuetzle, W., and Y. Mittal (1979): \\\"Some Comments on the Asymptotic Behavior of Robust Smoothers\\\", in Smoothing Techniques for Curve Estimation, Lecture Notes, 757. New York: Springer-Verlag, 191\\u2013195.\\n\\n[5]: Victor Chernozhukov, Juan Carlos Escanciano, Hidehiko Ichimura, Whitney K. Newey, and James M. Robins (2022): \\\"Locally robust semiparametric estimation\\\". Econometrica, 90(4):1501\\u20131535\\n\\n[6]: Caron, M., Touvron, H., Misra, I., J\\u00e9gou, H., Mairal, J., Bojanowski, P., & Joulin, A. (2021). Emerging properties in self-supervised vision transformers. Proceedings of the International Conference on Computer Vision (ICCV).\\n\\n[7]: Darcet, T., Oquab, M., Mairal, J., & Bojanowski, P. (2024). Vision transformers need registers. Published as a conference paper at ICLR 2024.\\n\\n-----\\n\\nWe are glad to answer any further questions you have on our submission.\"}",
"{\"title\": \"Any Questions from Reviewer pR7y on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"title\": \"Thanks for your endorsement!\", \"comment\": \"Thank you once again for your time and thoughtful feedback! We greatly appreciate your endorsement and suggestions regarding LLM fine-tuning. We will conduct the proposed experiments and incorporate additional analysis and evaluation of fine-tuning pre-trained off-the-shelf LLMs, such as LLaMA, using LLM evaluation benchmarks in our revision.\"}",
"{\"comment\": \"Thank you for your rebuttal and adding extra experiment for adding a hyperparameter to the twicing procedure. Your explanation partially addresses my questions and I am increasing my score to 6.\"}",
"{\"title\": \"One more comparison with alternative model that address oversmoothing\", \"comment\": \"On top of DeiT-NeuTRENO model compared in our previous response, we conducted an additional experiment with the FeatScale [10], another state-of-the-art vision transformer variant that tries to mitigate representation collapse. As shown in Table A2 below, our model DeiT-Twicing outperforms DeiT+FeatScale in both metrics on ImageNet classification. We report the same results in Table 9 of Appendix B.2 of our revision.\\n\\n**Table A2:** Top 1/Top 5 accuracies on clean ImageNet classification.\\n| Model | Top 1 | Top 5 |\\n|---|---|--\\n| DeiT | 72.00 | 91.14 |\\n| DeiT + FeatScale | 72.35 | 91.23 |\\n| DeiT-Twicing | **72.60** | **91.33** |\\n\\n[10]: Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. Anti-oversmoothing in deep vision transformers via the fourier domain analysis: From theory to practice. In International Conference on Learning Representations, 2022.\\n___\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal. We would be happy to do any follow-up discussion or address any additional comments.\\n\\nIf you agree that our responses to your reviews have addressed the concerns you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments!\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q3. For pure curiosity, I would like to ask what the authors think the performance of this method would be in more extreme cases, which in this case refers to two main scenarios: first, the performance on LLMs with a very large number of parameters. Second, on non-classical Transformer structures, such as Linear Transformer and other analogs.**\\n\\n**Performance on LLMs:** To answer the question about the potential performance on LLMs with larger number of parameters, we trained a medium-sized model with 20.97M parameters compared to the small-model with 9.43M parameters. As a result, we observe that Transformer-Twicing still offers improvements across both validation and test perplexities indicating scaling properties about as good as the baseline Transformer. Also, in Figure 7 of the appendix, we show the training curves for the language models of both sizes, and observe that Twicing Attention helps the models converge relatively faster as well.\\n\\n**Table F:** Language modeling on Wikitext-103.\\n| Model | Validation PPL ($\\\\downarrow$) | Test PPL ($\\\\downarrow$) |\\n|-------|----------------|----------|\\n| Transformer (small)| 38.11 | 37.51\\n| +Twicing (small)| **37.12** | **36.69**\\n| Transformer (medium)| 31.98 | 26.17\\n| +Twicing (medium)| **30.91** | **25.65**\\n\\n**Extreme Unconventional Transformers:** Since the Twicing Attention's theoretical framework does not depend on how exactly the weight matrix is built, we believe that as long as any Transformer architecture-based model leverages an attention mechanism with a concrete similarity (attention) matrix that can be connected to either NLM denoising or nonparametric Nadaraya-Watson estimation as in the paper, Twicing is highly likely to offer extra representation capacity and robustness. In particular, as transformers with linear attention [9] are concerned, the implementation steps would involve denoting their separable similarity matrix in Eqn. (4) of [9] as $A$, and replacing it with the corresponding twicing weight matrix $2A-A^2$.\\n\\n**References:**\\n\\n[1]: Newey, W.K., F. Hsieh, and J.M. Robins (2004). \\\"Twicing Kernels and a Small Bias Property of Semiparametric Estimators.\\\" Econometrica, Vol. 72, No. 3, pp. 947\\u2013962.\\n\\n[2]: Chernozhukov, V., Escanciano, J. C., Ichimura, H., Newey, W. K., & Robins, J. M. (2022). Locally robust semiparametric estimation. Econometrica: Journal of the Econometric Society.\\n\\n[3]: Tukey, J.W. (1977). \\\"Exploratory Data Analysis\\\". Reading, MA: Addison-Wesley.\\n\\n[4]: Stuetzle, W., and Y. Mittal (1979): \\\"Some Comments on the Asymptotic Behavior of Robust Smoothers\\\", in Smoothing Techniques for Curve Estimation, Lecture Notes, 757. New York: Springer-Verlag, 191\\u2013195.\\n\\n[9]: Katharopoulos, A., Vyas, A., Pappas, N., & Fleuret, F. (2020). Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML). PMLR.\\n\\n-----\\nWe hope we have cleared your concerns about our work. We have also revised our manuscript according to your comments, and we would appreciate it if we can get your further feedback at your earliest convenience.\"}"
]
} |
16O8GCm8Wn | Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances | [
"Shilin Lu",
"Zihan Zhou",
"Jiayou Lu",
"Yuanzhi Zhu",
"Adams Wai-Kin Kong"
] | Current image watermarking methods are vulnerable to advanced image editing techniques enabled by large-scale text-to-image models. These models can distort embedded watermarks during editing, posing significant challenges to copyright protection. In this work, we introduce W-Bench, the first comprehensive benchmark designed to evaluate the robustness of watermarking methods against a wide range of image editing techniques, including image regeneration, global editing, local editing, and image-to-video generation. Through extensive evaluations of eleven representative watermarking methods against prevalent editing techniques, we demonstrate that most methods fail to detect watermarks after such edits. To address this limitation, we propose VINE, a watermarking method that significantly enhances robustness against various image editing techniques while maintaining high image quality. Our approach involves two key innovations: (1) we analyze the frequency characteristics of image editing and identify that blurring distortions exhibit similar frequency properties, which allows us to use them as surrogate attacks during training to bolster watermark robustness; (2) we leverage a large-scale pretrained diffusion model SDXL-Turbo, adapting it for the watermarking task to achieve more imperceptible and robust watermark embedding. Experimental results show that our method achieves outstanding watermarking performance under various image editing techniques, outperforming existing methods in both image quality and robustness. Code is available at https://github.com/Shilin-LU/VINE | [
"AI Security",
"Watermark",
"Diffusion Model",
"Image Editing"
] | Accept (Poster) | https://openreview.net/pdf?id=16O8GCm8Wn | https://openreview.net/forum?id=16O8GCm8Wn | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ynaVqOpRua",
"wxFUtC8CdA",
"udUnoH7wkH",
"qMHtWVlPUV",
"oKWLujZKFu",
"mbuojorud8",
"lGMWnuxesU",
"iauultcKia",
"elEOZ8FRbN",
"ec1gpV7z1a",
"eKpPcxL3Vd",
"eIVIVOySok",
"c2xRzXzRC0",
"bpOqBs0A2H",
"b818G8NBHC",
"ZU41k7aeid",
"YIakoVJIQX",
"VwDMaWeuxr",
"Uq6jQuEiPw",
"OpBrf5Vbpz",
"Me5pAXfezS",
"LpmC7mj0i8",
"KfCwYdW7sP",
"KUcnK2KuoV",
"KA8porl4kn",
"Jd4XHxibqR",
"JVdTTKFKfw",
"IE9vTkMb2X",
"HVK4fbarIt",
"EdEIyJAQ6r",
"A1CZa6Y7Ki",
"9pszSZs6Jx",
"1PbjBoz1aF"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1732170486464,
1732691122083,
1737523389152,
1732606600997,
1732606732960,
1732156873606,
1734658154632,
1732606690101,
1730673222842,
1732159584402,
1732161115674,
1733065858541,
1732636783599,
1732156153684,
1732636251553,
1730629135015,
1732158504383,
1733047570018,
1732157323924,
1732159097275,
1732635006689,
1732156905409,
1730628760868,
1732634111504,
1732272136060,
1730651954909,
1732606653167,
1732272527770,
1732158560721,
1732157668328,
1732508257273,
1732694792870,
1730739837449
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_CgKQ"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Area_Chair_vAuG"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_QuDW"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_3J7W"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_QuDW"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_biAS"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_swi8"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_CgKQ"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_swi8"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_biAS"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_swi8"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Area_Chair_vAuG"
],
[
"ICLR.cc/2025/Conference/Submission293/Authors"
],
[
"ICLR.cc/2025/Conference/Submission293/Reviewer_3J7W"
]
],
"structured_content_str": [
"{\"title\": \"Response about practical value\", \"comment\": [\"Thank you for your question. Here are some practical examples of robust invisible watermarking in use:\", \"IMATAG: They aim to protect businesses from the unauthorized use of their images and videos. Visit their [homepage](https://www.imatag.com/) or check out their [X profile](https://x.com/imatag) for more information. In short, by embedding their robust invisible watermarks and utilizing their detector, IMATAG can (1) identify who is leaking your visual content; (2) monitor who is using your visual content; (3) verify the authenticity of your visual content.\", \"Adobe Integration: IMATAG is also integrated with Adobe software [here](https://exchange.adobe.com/apps/cc/101789/imatag-invisible-watermark-and-image-monitoring).\", \"Adobe Content Authenticity: It is designed to help creators protect and receive attribution for their work with Content Credentials. Content Credentials combine digital fingerprinting, invisible watermarking and cryptographically signed metadata, helping to ensure that Content Credentials remain intact and verifiable across the digital ecosystem [[blog link]](https://news.adobe.com/news/2024/10/aca-announcement).\", \"Google SynthID: Google uses its [SynthID](https://deepmind.google/discover/blog/identifying-ai-generated-images-with-synthid/) for Vertex AI customers.\", \"Please note that these technologies are not open-sourced. While they can protect digital content against common transformations, their robustness against AI-powered image editing is underexplored.\", \"If a malicious user uses generative models to remove or add a small element (such as a dog) to an artwork and publishes it without proper attribution, it could lead to copyright issues.\"]}",
"{\"comment\": \"Thank you for your detailed response. It has effectively addressed my concerns, allowing me to better appreciate and accept your viewpoint. I would like to increase the score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Dear Reviewer CgKQ,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper.\\n\\nAs the discussion period approaches its end, we would like to gently remind you that there are seven days remaining for any additional comments or questions. We would be grateful for the opportunity to address any further concerns you may have before the discussion phase concludes.\\n\\nThank you very much!\\n\\nMany thanks,\\n\\nThe Authors\"}",
"{\"comment\": \"Dear Reviewer 3J7W,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper.\\n\\nAs the discussion period approaches its end, we would like to gently remind you that there are seven days remaining for any additional comments or questions. We would be grateful for the opportunity to address any further concerns you may have before the discussion phase concludes.\\n\\nThank you very much!\\n\\nMany thanks,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer QuDW (1/2)\", \"comment\": \"We sincerely appreciate your recognition of our method as both reasonable and effective, as well as the solidity of our validation!\\n\\nWe will address all the concerns point by point.\\n***\\n**Weakness 2 & Question 1: Although the watermarking against Image Editing is interesting and novel, I cannot get the value of this task. Can you elaborate the perspective of this task?**\\n\\n_The Importance of Watermarking Images Against AI-Powered Image Editing_\\n\\nInvisible watermarks enable creators to assert ownership of their digital content. If a watermark can be easily removed through minor alterations (such as local editing) or visually imperceptible changes (like image regeneration), malicious users could remove or add a small element (such as a dog) to an artwork and publish it without proper attribution, which could lead to copyright issues. Such non-robust watermarks pose challenges for ownership assertion. \\n\\nIn contrast, editing-resistant watermarks can enhance security by embedding links that trace the source of leaks or unauthorized distributions back to the original distributor. For example, photographers and media companies can monitor the usage of their images, helping to identify unauthorized editing or redistribution. \\n***\\n**Weakness 1 & Question 2: (1) This paper lacks the validation of hypotheses in Line 249; (2) The author hypothesizes that a powerful generative prior can facilitate embedding information more invisibly while enhancing robustness (Line 249). Why hypothesize that? What are the assumptions based on?**\\n\\n1) _The validation of the hypothesis._ The validation is provided in the ablation study (Table 2 Config H). In Table 2, Config H is trained with randomly initialized weights instead of pretrained ones while retaining all other settings from Config G. Comparing Config H with Config G, it can be seen that the pretrained prior knowledge is beneficial for image quality, and thus it helps embedding information more invisibly.\\n\\n2) _Rationale for the Hypothesis._ This hypothesis is based on the expectation that the output of the watermarking process should visually resemble the input image, which is a natural, artifact-free image. Powerful generative priors, particularly large-scale diffusion models, have the capability to generate images within this natural, artifact-free manifold. SDXL-Turbo, distilled from its teacher model SDXL, inherits its generative capability. Thus, we expect that initializing with this strong generative prior will benefit the training process and significantly weaken the artifacts in watermarked images.\\n\\nWe hope this addresses the concerns you may have on this matter and remain available for further discussion!\\n***\\n**Weakness 3: The watermarking pattern existing in high-frequency bands after image blurring is not a new discovery. However, the author spends too much text on it.**\\n\\nThank you for your valuable suggestions. In response to your feedback, we have removed lines 212-213 of the initial version of our submission to reduce the discussion on blurring. Following your recommendation, now the emphasis has been shifted to the similarities between the frequency characteristics of image editing and blurring. This is a new discovery that we believe would be important and insightful for readers.\\n***\\n**Question 3: What is the purpose of finetuning VINE-B to VINE-R using Instruct-Pix2Pix? 
(Line 323)**\\n\\nThe purpose of incorporating Instruct-Pix2Pix, a representative editing model, into the finetuning process is to further boost the robustness against image editing. This integration is performed during finetuning because attempting to include the editing model at the initial training stage using a straight-through estimator (STE) results in convergence failures. \\n\\nIn fact, even without this finetuning step, VINE-B already outperforms the baseline models overall (see Figure 1). We are interested in determining whether a well-trained watermarking model can be finetuned with an editing model through STE to further improve robustness. Therefore, we implement this variant.\\n***\\n**Question 4: Why is the resolution not unified? (Line 1042)**\\n\\nThanks for your question. All baseline models were evaluated using their official checkpoints, which were trained at various resolutions. Table\\u202f3 presents the performance of these methods at their original training resolutions (thus not unified). The purpose of Table\\u202f3 is to demonstrate that scaling the resolution does not affect the quality of the encoded images. The results after scaling to the unified resolution are shown in Table\\u202f1. \\n\\nAdditionally, Figures\\u202f9 and\\u202f10 of the new version of our submission (or Figures 7 and 8 of the 1st version of our submission) illustrate that resolution scaling does not impact detection robustness. These evaluations were conducted to determine whether resolution scaling affects the original performance, as it is necessary to scale images to a unified 512\\u202f\\u00d7\\u202f512 resolution before inputting them into the editing models.\"}",
"{\"metareview\": \"This paper is working on robust watermarking. Authors first proposed W-Bench to evaluate the robustness of watermarking methods against a wide range of image editing techniques. Then proposed VINE to improve watermarking robustness against all these different edits. Experimental results show effectiveness of the proposed methods.\\n\\n5 reviewers unanimously considered this paper above the acceptance bar.\", \"strengths_of_this_paper_are\": \"1) clearly written and organized; 2) rigorous and solid evaluations; 3) proposed method is easy yet effective; 4) task is innovative and important.\", \"weaknesses_are\": \"1) unfair comparison for EditGuard; 2) lacks validation of some hypotheses; 3) lacks comparison; 4) need more experiments, etc;\\n\\nAfter rebuttal, reviewers' concerns are addressed. Some reviewers increased scores and some kept the scores. Given these, AC decide to accept this paper.\", \"additional_comments_on_reviewer_discussion\": \"After rebuttal, reviewers' concerns are addressed. Some reviewers increased scores and some kept the scores. Given these, AC decide to accept this paper.\"}",
"{\"comment\": \"Dear Reviewer QuDW,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper.\\n\\nAs the discussion period approaches its end, we would like to gently remind you that there are seven days remaining for any additional comments or questions. We would be grateful for the opportunity to address any further concerns you may have before the discussion phase concludes.\\n\\nThank you very much!\\n\\nMany thanks,\\n\\nThe Authors\"}",
"{\"summary\": \"This paper introduces W-Bench, the first comprehensive benchmark designed to evaluate the robustness of watermarking methods against a wide range of image editing techniques, including image regeneration, global editing, local editing, and image-to-video generation. Authors reveal that image editing and blurring distortion predominantly remove watermarking patterns in high-frequency bands, while those in low-frequency bands remain less affected. Based on this, distortions are used as surrogate attacks to overcome the challenges of using T2I models during training and to enhance the robustness of the watermark. The authors approach the watermark encoder as a conditional generative model and introduce two techniques to adapt SDXL-Turbo, a pretrained one-step T2I model, for the watermarking task. Experimental results demonstrate that VINE is robust against multiple image editing methods while maintaining high image quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe proposed method is easy yet effective. The combination of different losses is reasonable.\\n2.\\tThe validation of watermarking patterns in high-frequency bands after image editing and blurring is solid.\\n3.\\tThe experimental results show the proposed watermarking method is robust enough against multiple image editing methods.\", \"weaknesses\": \"1.\\tThis paper lacks the validation of hypotheses in Line 249.\\n2.\\tThe task of watermarking against Image Editing seems worthless.\\n3.\\tThe watermarking pattern existing in high-frequency bands after image blurring is not a new discovery. However, the author spends too much text on it.\", \"questions\": \"1. Although the watermarking against Image Editing is interesting and novel, I cannot get the value of this task. Can you elaborate the perspective of this task?\\n2. The author hypothesizes that a powerful generative prior can facilitate embedding information more invisibly while enhancing robustness (Line 249). Why hypothesize that? What are the assumptions based on?\\n3. What is the purpose of finetuning VINE-B to VINE-R using Instruct-Pix2Pix? (Line 323)\\n4. Why is the resolution not unified? (Line 1042) \\n5. Is VINE only work on the Image Editing task? What about other common watermarking tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer CgKQ (2/2)\", \"comment\": \"**Question 2: (1) The experimental results demonstrate that the proposed watermarking method, VINE, significantly enhances robustness against various image editing techniques. Has the author considered using representative image editing as an attack template, incorporating the associated attack loss as one of the objective functions during the training phase? (2) Alternatively, how might integrating the specific effects of image editing on watermarks into the design of the watermarking model influence the results of the watermarking algorithm?**\\n\\n(1) We have considered incorporating a representative image editing method into our noise layer during training. This approach has led to the development of our VINE-R variant, which is fine-tuned from VINE-B by integrating the Instruct-Pix2Pix pipeline during the training process.\\n\\n(2) Image editing tends to remove patterns present in high-frequency bands. Thus, in our preliminary design, we expected that integrating the effects of image editing into our design would involve deliberately forcing the pattern to appear in the low-frequency band by adding an additional loss in the frequency domain. This point is particularly intriguing. \\n\\nIn our preliminary experiments, we attempted to position the pattern in the low-frequency region by ensuring that the high-frequency bands of the Fourier spectra of the watermarked and original images are identical. However, this loss did not lead to a significant improvement in robustness and resulted in a substantial decrease in the quality of the encoded images. \\n\\nWe believe this outcome occurs because injecting all information into the low-frequency bands adversely affects the image quality. In contrast, allowing the model the flexibility to choose which frequency bands to inject the watermark helps balance robustness and image quality. Therefore, we decided not to include this frequency loss in our final design.\\n\\n***\\n**Question 3: In the experimental section, some of the differences between the subjective experimental results are difficult to discern visually. The author could consider selecting a subset of images and enlarging specific regions to facilitate reader comprehension.**\\n\\nThank you for your valuable suggestion! Based on your reference to the \\\"subjective experimental results,\\\" we believe you are referring to Figure 15 in the first version of our submission. In the revised submission, this figure has been renumbered to Figure 17 and enhanced with two additional columns that display enlarged central $40\\\\times40$ regions of the watermarked and residual images, respectively. If anything else is unclear or if other figures require further clarification, we are happy to make additional improvements.\\n\\n***\\nWe hope this addresses the concerns you may have and are always available for further discussion. We deeply appreciate the time you have taken to engage with our work and share your valuable insights.\"}",
"{\"title\": \"Practical value\", \"comment\": \"Thanks for the feedback. After reading the review of QuDW, I also started to doubt about the practical value of this field.\\n\\nCould the authors provide an example of a network-based method being practically applied? My research area is not in this field, but it seems to me that this field offers little practical value beyond papers and some handcraft metrics.\"}",
"{\"comment\": \"Dear Reviewer swi8,\\n\\nWe would like to express our gratitude for your valuable feedback on our submission and for indicating your willingness to increase the score to 8. Your support is greatly appreciated!\\n\\nWe noticed that the score adjustment hasn't been reflected in OpenReview yet. If it's not too much trouble, could you kindly update the score by editing your review in the OpenReview system?\\n\\nThank you once again for your time and valuable feedback!\\n\\nWarm regards,\\n\\nThe Authors\"}",
"{\"comment\": \"Dear Reviewer QuDW,\\n\\nThank you for dedicating your time and effort to reviewing our paper!\\n\\nIf you have any further concerns, please do not hesitate to let us know. We would greatly appreciate the opportunity to address any additional issues you may have before the discussion phase concludes.\\n\\nWarm regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer 3J7W\", \"comment\": \"We sincerely appreciate your recognition of our paper\\u2019s well organization and rigorous evaluation!\\n\\n***\\n**Weaknesses 1: EditGuard is primarily designed for editing detection, not robust watermarking, and it was not tested with its most robust configuration. This impacts the fairness of the evaluation, as EditGuard\\u2019s focus and strengths differ from VINE\\u2019s intended use.**\\n\\nWe appreciate your feedback and have updated Section 4.3 (lines 470-472) in the new submission accordingly. A new remark, highlighted in purple, has been added to clarify the primary focus of EditGuard's design. For your convenience, the remark is also provided below.\\n\\nEditGuard is not designed for robust watermarking against image editing, as it is trained with mild degradation. Instead, it offers a feature for tamper localization, enabling the identification of edited regions.\\n\\n***\\nIf you believe further improvements are needed, we would be happy to make additional revisions!\"}",
"{\"comment\": \"Thank you for your response. I would like to keep the previous score.\"}",
"{\"summary\": \"This paper introduces an image watermarking benchmark, specifically aiming to evaluate the watermark robustness against four image editing methods. In addition, an image watermarking that is robust against image editing is proposed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper focuses on the image watermark robustness against image editing, which is important but has rarely been explored.\\n2. The proposed benchmark includes different types of image editing approaches, rendering it comprehensive to some extent.\\n3. The proposed SDXL-Turbo-based robust image watermarking method is novel, and the experiments demonstrate its effectiveness.\\n4. The paper is overall well-written.\", \"weaknesses\": \"1. The benchmark only considers four types of image editing methods (image regeneration, global editing, local editing, and image-to-video generation). Other image editing methods such as style transfer are not considered.\\n2. Only one image-to-video generation method is included in the benchmark. The robustness against other image-to-video generation methods such as [1] is not evaluated.\\n\\n\\n[1] Hu, Yaosi, Chong Luo, and Zhenzhong Chen. \\\"Make it move: controllable image-to-video generation with text descriptions.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\", \"questions\": \"1. What is the reason for choosing only these four types of image editing methods (image regeneration, global editing, local editing, and image-to-video generation) to evaluate the image watermarking robustness, against image editing?\\n2. What is the motivation for using SDXL-Turbo as the generative prior for watermark encoding? If it is just to avoid multi-step sampling, there should be lots of one-step generative models to choose from, for example, the SDXS [2]. \\n\\n[2] Song, Yuda, Zehao Sun, and Xuanwu Yin. \\\"SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions.\\\" arXiv preprint arXiv:2403.16627 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer biAS (1/2)\", \"comment\": \"We sincerely appreciate your recognition of our focus on the important and underexplored area of image watermark robustness against editing, the comprehensiveness of our benchmark, the novelty and effectiveness of our method, and the overall quality of our writing!\\n\\nWe will address all the concerns point by point.\\n\\n***\\n**Weakness 1 & Question 1: What is the reason for choosing only these four types of image editing methods (image regeneration, global editing, local editing, and image-to-video generation) to evaluate the image watermarking robustness, against image editing?**\\n\\nThank you for your question. We provide an explanation below and have also included it in Appendix G.1 (highlighted in red) to facilitate readers\\u2019 understanding.\\n\\nThese four types of editing encompass the majority of editing needs. Image editing can be broadly categorized into global and local editing, as user edits typically affect either the entire image or a specific part of it.\\n\\n- Global editing involves altering most of an image's pixels while maintaining its overall layout, components, or semantics. Techniques such as style transfer, cartoonization, image translation, and scene transformation fall under this category and produce similar effects. For example, using prompts like \\\"turn it into Van Gogh style\\\" or \\\"convert it into a sketchy painting\\\" in an image editing model can effectively achieve style transfer.\\n\\n- Local editing, on the other hand, refers to modifications applied to specific elements, semantics, or regions within an image. This category includes image inpainting, image composition, object manipulation, attribute manipulation, and so forth.\\n\\n- While image regeneration and image-to-video generation are not strictly considered forms of image editing, they can be used to create similar digital content while removing watermarks, thereby posing a threat to copyright protection. For this reason, we have included them in our benchmark.\\n\\n***\\n**Weakness 2: Only one image-to-video generation method is included in the benchmark. The robustness against other image-to-video generation methods such as [1] is not evaluated.**\\n\\nThank you for highlighting another image-to-video generation model (MAGE [1]). Following your recommendation, we have conducted experiments on it.\\n\\nSince MAGE [1] cannot process natural images, we are unable to use our benchmark images for evaluation. Instead, we utilize the CATER-GEN-v2 dataset proposed in their paper, which is a more complex version compared to CATER-GEN-v1. This dataset contains three to eight objects per video, with each object having four attributes randomly selected from five shapes, three sizes, nine colors, and two materials.\\n\\nWe add watermarks to 1,000 images from the CATER-GEN-v2 dataset and use the MAGE+ model to perform text-image-to-video (TI2V) generation, producing 10-frame videos. For watermark detection, we analyze frames 2, 4, 6, 8, and 10. 
The average detection accuracies across 1,000 videos are presented in the table below and are also included in Appendix G.4 (Table 6) (highlighted in red).\\n\\n***\\n| Method | Frame 2 | Frame 4 | Frame 6 | Frame 8 | Frame 10 | Average |\\n|--------------|---------|---------|---------|---------|----------|---------|\\n| MBRS | 89.57 | 88.67 | 87.45 | 86.51 | 84.42 | 87.32 |\\n| CIN | 45.92 | 44.78 | 43.21 | 42.17 | 40.71 | 43.36 |\\n| PIMoG | 78.23 | 76.99 | 75.72 | 74.91 | 73.12 | 75.79 |\\n| RivaGAN | 56.87 | 54.83 | 53.21 | 52.14 | 51.01 | 53.61 |\\n| SepMark | 63.45 | 62.15 | 61.03 | 60.89 | 59.24 | 61.35 |\\n| DWTDCT | 30.57 | 29.51 | 28.89 | 27.72 | 26.87 | 28.71 |\\n| DWTDCTSVD | 38.56 | 38.54 | 37.12 | 36.81 | 35.74 | 37.35 |\\n| SSL | 81.21 | 80.95 | 78.99 | 77.18 | 76.12 | 78.89 |\\n| StegaStamp | 91.25 | 90.34 | 89.12 | 88.67 | 87.33 | 89.34 |\\n| TrustMark | 90.35 | 90.12 | 89.45 | 87.69 | 86.13 | 88.75 |\\n| EditGuard | 42.57 | 41.46 | 40.55 | 39.91 | 38.17 | 40.53 |\\n| VINE-Base | 92.22 | 91.35 | 90.74 | 89.12 | 88.01 | 90.29 |\\n| VINE-Robust | 93.14 | 92.88 | 91.32 | 90.27 | 89.12 | 91.35 |\\n***\\n\\nInterestingly, we found that the detection accuracies of most watermarking models are higher compared to when testing with the SVD. We attribute this to the simplicity of the dataset: the background remains mostly unchanged without significant camera motion, and only a few objects move while most remain static. This makes detection easier than in the SVD case, which typically involves camera motion effects. In this case, VINE still outperforms other watermarking models.\"}",
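Editor's illustrative note: for readers unfamiliar with the per-frame averaging protocol described in the response above (decode selected frames of each generated video, then average accuracy over videos), here is a minimal sketch. `decode_bits` is a hypothetical stand-in for a real watermark extractor, and all shapes and numbers are placeholders, not the authors' evaluation code.

```python
# Minimal sketch of the evaluation loop described above: decode selected frames of each
# generated video and average bit accuracy per frame across videos.
import numpy as np

def decode_bits(frame, num_bits=100):
    """Placeholder decoder stub: ignores the frame; replace with a real watermark extractor."""
    return np.random.randint(0, 2, size=num_bits)

def per_frame_bit_accuracy(videos, gt_bits, frame_ids=(1, 3, 5, 7, 9)):
    """videos: list of (T, H, W, C) arrays; gt_bits: (num_videos, num_bits) ground-truth bits.
    frame_ids are 0-indexed, so (1, 3, 5, 7, 9) correspond to frames 2, 4, 6, 8, 10."""
    accs = {f: [] for f in frame_ids}
    for vid, bits in zip(videos, gt_bits):
        for f in frame_ids:
            accs[f].append((decode_bits(vid[f]) == bits).mean())
    return {f: float(np.mean(a)) for f, a in accs.items()}

videos = [np.random.rand(10, 64, 64, 3) for _ in range(4)]
gt_bits = np.random.randint(0, 2, size=(4, 100))
print(per_frame_bit_accuracy(videos, gt_bits))
```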
"{\"comment\": \"OK\\uff0cI am willing to increase the score to 8.\"}",
"{\"title\": \"Response to Reviewer swi8 (1/2)\", \"comment\": \"We sincerely appreciate your recognition of the innovation in our task and the use of generative priors, as well as your acknowledgment of the comprehensiveness of our evaluation benchmark!\\n\\nWe will address all the concerns point by point.\\n\\n***\\n**Weakness 1: TreeRing, Gaussian Shading, and RingID, which add watermarks in the frequency domain of the initial noise, are generally considered robust against image editing (e.g., prompt2prompt) and regeneration. This paper lacks this crucial comparison. If these methods are also robust to image editing, the contribution of this paper may be diminished.**\\n\\nThank you for your insightful questions. TreeRing, Gaussian Shading, and RingID are well-known in-generation watermarking techniques designed for watermarking generated images. However, these methods are not applicable to real images and therefore cannot be used for copyright protection of authentic photographs. \\n\\nExperiments are conducted to verify this. Specifically, we apply DDIM/DPM inversion techniques to extract the initial noise from a real image, add a watermark using these methods, and then invert back to obtain the image, the resulting image would differ significantly from the original one, undermining the photographer\\u2019s intent to protect their work. The visual results are shown in Figure 6 of the new version of our submission.\\n\\nConsequently, while these methods are highly effective for watermarking generated images, they fall outside the scope of our study. All of our baseline methods are capable of adding watermarks to real images. We have also included a discussion of these well-known in-generation watermarking methods in Appendix A.1. We kindly invite you to review that section, and we hope this addresses the concerns you may have on this matter. Your feedback is greatly appreciated!\"}",
"{\"title\": \"Response to Reviewer CgKQ (1/2)\", \"comment\": \"We sincerely appreciate your recognition of the significance of our benchmark and its benefits for future research, as well as your acknowledgment that our paper is clearly articulated and well-supported!\\n\\nWe will address all the concerns point by point.\\n\\n***\\n\\n**Weakness 1: The paper explains the reasons behind the watermarking algorithm's resistance to image editing from the perspective of the frequency domain. It notes that the watermarking methods exhibiting high robustness against image editing in certain scenarios display prominent patterns in the low-frequency bands, which aligns with the general understanding of watermark robustness. However, the paper primarily focuses on the robustness of watermarking methods against image editing techniques based on generative models. Therefore, summarizing the unique effects of such image editing techniques on the watermark is more meaningful.**\\n\\nThank you for your valuable suggestions.\\n\\nIn response to your feedback, we added a new Figure\\u202f8 in Appendix\\u202fB (in the new version of our submission) to illustrate the impact of different image editing methods on various watermark patterns within the frequency domain. As shown, the frequency patterns of VINE-R, VINE-B, MBRS, and StegaStamp are less affected compared to their original patterns (shown in Figure 7) than those of other watermarking methods. We believe this provides interesting and insightful information for our readers, and we are truly grateful for your input in enhancing the clarity of our work.\\n\\nAdditionally, we retain Figure 3 to help readers better understand the impact of image editing in a more disentangled manner, as it can be challenging to isolate the effects on low-, mid-, and high-frequency bands solely through the visual representation in Figure 8.\\n\\nIf any aspects remain unclear or if further clarification of other figures is needed, we would be happy to make additional improvements!\\n\\n***\\n**Weakness 2 & Question 1: Figure 6 in the appendix shows that VINE exhibits higher brightness in the central region, providing evidence for why the proposed watermarking method demonstrates strong robustness against image editing. If the author can thoroughly elucidate the principles underlying this phenomenon, it may address the previously mentioned issue of \\\"a disconnect between the author's analysis of watermark robustness and the design of the watermark model.\\\"**\\n\\nThank you for your insightful question. The observation that VINE exhibits higher brightness in the low-frequency bands is indeed interesting, though this was not the motivation behind our design choices. Allow us to walk you through our design and analysis process.\\n\\nOur two key design elements (surrogate layer & generative prior adaptation) are based on the two following observations: (1) Image editing and image blurring exhibit similar frequency characteristics. (2) Image watermarking can be viewed as a form of conditional generation, where a generative prior can enhance image quality by making watermarks less visible. The surrogate layer enhances our model's robustness to image editing, while the generative prior improves the quality of the encoded images. 
Therefore, our design is grounded in these analyses.\\n\\nRegarding the intriguing correlation you mentioned\\u2014the robustness of our watermarking model against image editing being highly positively correlated with pattern intensity in the low-frequency region\\u2014this emerged from our training process rather than driving our design decisions.\\n\\nIn summary, this finding is a byproduct of our design rather than a deliberate motivation of it. We did not intend to concentrate the watermark pattern in the low-frequency region. In fact, when we attempted to do so by introducing an additional frequency loss, it resulted in poorer performance, which we discuss in the next question. This finding could offer valuable insights for future research aimed at achieving a better balance between robustness and encoded image quality. \\n\\nWe have included this discussion in Appendix B of the new version of our submission (highlighted in green) to enhance readers' understanding. Thank you for helping to make our work more complete!\"}",
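Editor's illustrative note: to make the frequency-band discussion in the response above more concrete, the following is an illustrative sketch (not the authors' analysis code) of how one might measure how much of a watermark residual's energy falls into low, mid, and high radial bands of the 2D Fourier spectrum; the band cutoffs and names are arbitrary assumptions of this sketch.

```python
# Illustrative only: rough low/mid/high band energy split of a watermark residual's
# 2D Fourier spectrum, in the spirit of the frequency-pattern comparison discussed above.
import numpy as np

def band_energy_fractions(residual, low_cut=0.1, mid_cut=0.3):
    """residual: (H, W) array, e.g. watermarked image minus original (grayscale)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(residual))) ** 2
    h, w = residual.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0) / (min(h, w) / 2.0)  # normalized radius from center
    total = spec.sum()
    low = spec[r < low_cut].sum() / total
    mid = spec[(r >= low_cut) & (r < mid_cut)].sum() / total
    high = spec[r >= mid_cut].sum() / total
    return low, mid, high

residual = np.random.randn(256, 256)  # stand-in for (watermarked - original)
print(band_energy_fractions(residual))
```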
"{\"comment\": \"Dear Reviewer swi8,\\n\\nThank you very much for your positive feedback on our revisions! We are delighted to hear that your concerns have been essentially addressed.\\n\\nAs the current scoring system does not include an option for a score of\\u202f7, we kindly ask if you would consider adjusting your score to\\u202f8 in the system, if possible.\\n\\nShould you have any further questions or suggestions, please do not hesitate to let us know. We greatly appreciate your valuable contributions throughout the review process!\\n\\nWarm regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer QuDW (2/2)\", \"comment\": \"**Question 5: Is VINE only work on the Image Editing task? What about other common watermarking tasks?**\\n\\nThank you very much for highlighting this important question! We fully agree with the significance of this aspect of our method. VINE also remains robust against common degradations such as JPEG compression, Gaussian noise, contrast modifications, brightness adjustments, and more. For a comprehensive overview of its performance, we kindly direct your attention to Figure 9 in the revised version of our submission (previously Figure 7). We sincerely hope this helps address your concerns, and we are always available to discuss this further if needed. Your feedback is greatly appreciated!\\n***\\n\\nIf the reviewer believes further improvements are needed, we would be happy to make additional revisions!\"}",
"{\"summary\": \"The paper evaluates eleven watermarking methods against prevalent image editing techniques and demonstrates that most methods fail to detect watermarks after such edits. It also introduces a watermarking model based on SDXL-Turbo, which exhibits high robustness against these editing methods while maintaining high image quality.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents the first holistic benchmark that incorporates four types of image editing techniques to assess the robustness of watermarking methods. This is significant for evaluating the robustness of future watermarking methods, as it helps to promote the standardization and comprehensiveness of robustness assessments. By addressing a critical gap in evaluating watermark resilience against sophisticated transformations enabled by modern generative models, this work encourages researchers in the field of image watermarking to focus on the robustness of their methods against emerging image editing technologies, including image regeneration, global editing, local editing, and image-to-video generation. Overall, the paper is clearly articulated and well-supported.\", \"weaknesses\": \"1. The paper explains the reasons behind the watermarking algorithm's resistance to image editing from the perspective of the frequency domain. It notes that the watermarking methods exhibiting high robustness against image editing in certain scenarios display prominent patterns in the low-frequency bands, which aligns with the general understanding of watermark robustness. However, the paper primarily focuses on the robustness of watermarking methods against image editing techniques based on generative models. Therefore, summarizing the unique effects of such image editing techniques on the watermark is more meaningful.\\n2. We observe that the proposed watermarking method, VINE, shows higher brightness in the central region of the frequency domain, which corresponds to the author's analysis of watermark robustness. However, the paper does not clarify why this watermarking model based on SDXL-Turbo exhibits such characteristics, leading to the author's specific design of the watermark algorithm. In other words, there seems to be a disconnect between the author's analysis of watermark robustness and the design of the watermark model.\", \"questions\": \"1.Figure 6 in the appendix shows that VINE exhibits higher brightness in the central region, providing evidence for why the proposed watermarking method demonstrates strong robustness against image editing. If the author can thoroughly elucidate the principles underlying this phenomenon, it may address the previously mentioned issue of \\\"a disconnect between the author's analysis of watermark robustness and the design of the watermark model.\\\"\\n\\n2.The experimental results demonstrate that the proposed watermarking method, VINE, significantly enhances robustness against various image editing techniques. Has the author considered using representative image editing as an attack template, incorporating the associated attack loss as one of the objective functions during the training phase? Alternatively, how might integrating the specific effects of image editing on watermarks into the design of the watermarking model influence the results of the watermarking algorithm?\\n\\n3. In the experimental section, some of the differences between the subjective experimental results are difficult to discern visually. 
The author could consider selecting a subset of images and enlarging specific regions to facilitate reader comprehension.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the detailed response, which has essentially resolved my concern. I am willing to increase the score to 7.\"}",
"{\"comment\": \"I think all of my concerns are well addressed. I would like to raise my rating.\"}",
"{\"summary\": \"This paper introduces a new evaluation benchmark, W-Bench, designed to test the robustness of image watermarking methods under image editing supported by large-scale generative models. W-Bench includes image regeneration, global editing, local editing, and image-to-video generation. The authors also propose VINE, a watermarking method utilizing generative priors to enhance the robustness and visual quality of watermark embedding. Experiments show that VINE outperforms existing watermarking methods across various image editing techniques.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Comprehensive Evaluation Framework: W-Bench covers a variety of image editing techniques, providing a comprehensive platform for assessing the robustness of watermarking methods.\\n\\n2. Innovative Use of Generative Priors: VINE embeds watermarks by adapting pretrained large-scale generative models, making the embedding more imperceptible and robust.\\n\\n3. This task is innovative, focusing on watermarking that is robust against image editing methods.\", \"weaknesses\": \"TreeRing, Gaussian Shading, and RingID, which add watermarks in the frequency domain of the initial noise, are generally considered robust against image editing (e.g., prompt2prompt) and regeneration. This paper lacks this crucial comparison. If these methods are also robust to image editing, the contribution of this paper may be diminished.\", \"reference\": \"1. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust\\n2. Ringid: Rethinking tree-ring watermarking for enhanced multi-key identification\\n3. Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models\", \"questions\": \"1. I have doubts about the results in Figure 5(a). The experimental results show that 250-step noise in image regeneration can significantly disrupt the watermark\\uff08bit acc). Does this mean that global image editing (e.g., SDedit, prompt2prompt) with 250 steps can also completely remove the watermark? If so, I believe this result does not demonstrate robustness, as global image editing often uses even more denoising steps.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer swi8,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper.\\n\\nAs the discussion period approaches its end, we would like to gently remind you that there are seven days remaining for any additional comments or questions. We would be grateful for the opportunity to address any further concerns you may have before the discussion phase concludes.\\n\\nThank you very much!\\n\\nMany thanks,\\n\\nThe Authors\"}",
"{\"comment\": \"Thank you for your prompt response and positive feedback on our revisions. Please let us know if you have any further questions or suggestions.\\n\\nWe also appreciate your valuable contributions throughout the review process!\"}",
"{\"title\": \"Response to Reviewer biAS (2/2)\", \"comment\": \"**Question 2: What is the motivation for using SDXL-Turbo as the generative prior for watermark encoding? If it is just to avoid multi-step sampling, there should be lots of one-step generative models to choose from, for example, the SDXS [2].**\\n\\nThe motivation for using one-step diffusion models is to assess whether a powerful generative prior enhances watermarking performance. We chose SDXL-Turbo as the first choice because we believe it is a highly representative one-step generative model in all, and our work currently takes the initial steps to verify whether a commonly used generative prior is beneficial for watermarking. \\n\\nWe appreciate that you suggested using other potentially better one-step or few-step models, such as SDXS, LCM, InstaFlow, SD3.5, SwiftBrush, DMD, UFOGen, or Flux.Shenell. Although our SDXL-Turbo-based model has already advanced watermarking performance, we believe investigating alternative models with better prior performance could be a valuable direction for future research!\\n\\n***\\nWe hope this addresses any concerns you may have. If you feel further improvements are necessary, we would be happy to make additional revisions!\\n\\n***\\n[1] Hu, Yaosi, Chong Luo, and Zhenzhong Chen. \\\"Make it move: controllable image-to-video generation with text descriptions.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[2] Song, Yuda, Zehao Sun, and Xuanwu Yin. \\\"SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions.\\\" arXiv preprint arXiv:2403.16627 (2024).\"}",
"{\"title\": \"Response to Reviewer swi8 (2/2)\", \"comment\": \"**Question 1: I have doubts about the results in Figure 5(a). The experimental results show that 250-step noise in image regeneration can significantly disrupt the watermark\\uff08bit acc). Does this mean that global image editing (e.g., SDedit, prompt2prompt) with 250 steps can also completely remove the watermark?**\\n\\nWe apologize for any confusion. \\n\\nThe key point is that an imperfect bit accuracy of approximately 80% does not indicate that the watermark has been compromised. Even if an image achieves an extracted bit accuracy of around only 80%, it still has a very high probability\\u2014potentially exceeding 99%\\u2014of being flagged as watermarked in the statistical test. Therefore, the 250-step noise applied during image regeneration does not disrupt the watermark.\\n\\nThe following provides a detailed introduction to the statistical test.\\n***\\nLet $\\\\boldsymbol{w} \\\\in \\\\\\\\{0,1\\\\\\\\}^k$ be the $k$-bit ground-truth watermark. The watermark $\\\\boldsymbol{w}^\\\\prime$ extracted from a watermarked image $\\\\boldsymbol{x}_w$ is compared with the ground-truth $\\\\boldsymbol{w}$ for detection. The detection statistical test relies on the number of matching bits $M(\\\\boldsymbol{w}, \\\\boldsymbol{w}^\\\\prime)$: If\\n$$\\nM(\\\\boldsymbol{w}, \\\\boldsymbol{w}^\\\\prime) \\\\ge \\\\tau, \\\\text{where} \\\\ \\\\tau \\\\in \\\\\\\\{0,1,2, \\\\cdots, k\\\\\\\\},\\n$$\\nthen the image is flagged as watermarked. Formally, we test the statistical hypothesis $H_1$: '$\\\\boldsymbol{x}$ contains the watermark $\\\\boldsymbol{w}$' against the null hypothesis $H_0$: '$\\\\boldsymbol{x}$ does not contain the watermark $\\\\boldsymbol{w}$'. Under $H_0$ (i.e., for original images), if the extracted bits $\\\\boldsymbol{w} = \\\\\\\\{w_1^\\\\prime, w_2^\\\\prime, \\\\cdots, w_k^\\\\prime\\\\\\\\}$ (where $w_i^\\\\prime$ is the i-th extracted bit) from a model are independent and identically distributed (i.i.d.) Bernoulli random variables with the matching probability $p_o$, then $M(\\\\boldsymbol{w}, \\\\boldsymbol{w}^\\\\prime)$ follows a binomial distribution with parameters $(k, p_o)$. This assumption is verified by [1].\\n\\nThe false positive rate (FPR) is the probability that $M(\\\\boldsymbol{w}, \\\\boldsymbol{w}^\\\\prime)$ takes a value bigger than the threshold $\\\\tau$ under the null hypothesis $H_0$. It is obtained from the CDF of the binomial distribution, and a closed-form can be written with the regularized incomplete beta function $I_p(a,b)$ [1]:\\n$$\\n\\\\text{FPR}(\\\\tau) = \\\\mathbb{P} \\\\left( M(\\\\boldsymbol{w}, \\\\boldsymbol{w}^\\\\prime)>\\\\tau|H_0 \\\\right) = \\\\sum_{i=\\\\tau + 1}^{k} \\\\binom{k}{i} p_o^i(1-p_o)^{k-i} = I_{p_o}(\\\\tau + 1, k - \\\\tau),\\n$$\\nwhere under $H_0$ (i.e., images without watermark $\\\\boldsymbol{w}$), $p_o$ should ideally be close to 0.5 to minimize the risk of false positive detection. \\n\\nSimilarly, the true positive rate (TPR) represents the probability that the number of matching bits exceeds the threshold $\\\\tau$ under $H_1$, where the image contains the watermark. 
Thus, the TPR can be calculated by:\\n$$\\n\\\\text{TPR}(\\\\tau) = \\\\mathbb{P} \\\\left( M(\\\\boldsymbol{w}, \\\\boldsymbol{w}^\\\\prime)>\\\\tau | H_1 \\\\right) = \\\\sum_{i=\\\\tau + 1}^{k} \\\\binom{k}{i} p_w^i(1-p_w)^{k-i} = I_{p_w}(\\\\tau + 1, k - \\\\tau),\\n$$\\nwhere under $H_1$ (i.e., images with watermark $\\\\boldsymbol{w}$), $p_w$ should ideally be high enough (e.g., exceeding 0.8) to ensure the effectiveness of a watermarking model.\\n\\nTo further demonstrate that _neither high bit accuracy nor AUROC alone guarantees a high TPR at a low FPR_, consider the following example. Suppose we have a 100-bit watermarking model with a threshold $\\\\tau$ of 70 to determine whether an image contains watermark $\\\\boldsymbol{w}$. If the model extracts bits from watermarked images with a matching probability $p_w = 0.8 $ and from original images with a matching probability $p_o = 0.5 $, the resulting FPR would be $ 1.6 \\\\times 10^{-5} $ and the TPR would be 0.99. In this scenario, even though the bit accuracy for watermarked images is not exceptionally high (e.g., below 0.9), the model still achieves a high TPR at a very low FPR. In contrast, if another model has $ p_w = 0.9 $ and $ p_o = 0.7 $, achieving the same FPR would require setting the threshold $\\\\tau$ to 87. Under these conditions, the TPR would only be 0.8. This example demonstrates that high bit accuracy for watermarked images does not necessarily ensure a high TPR when maintaining a low FPR. Therefore, relying solely on bit accuracy or AUROC may not be sufficient for achieving the desired performance in watermark detection.\\n\\n***\\nWe have included this information in Appendix C in the new version of our submission (highlighted in blue) to enhance reader understanding. We kindly invite you to review it and hope that it addresses the concerns you may have regarding this matter. If you believe further improvements are needed, we would be happy to make additional revisions. Thank you for helping to make our work more complete!\\n\\n[1] The stable signature: Rooting watermarks in latent diffusion models.\"}",
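The worked example in the response above (a 100-bit watermark, threshold $\tau = 70$, $p_w = 0.8$, $p_o = 0.5$) can be reproduced directly from the binomial tail formulas for FPR and TPR. Below is a minimal sketch using SciPy's binomial survival function; the variable names are ours, not the paper's.

```python
from scipy.stats import binom

k, tau = 100, 70      # watermark length and detection threshold
p_o, p_w = 0.5, 0.8   # per-bit match probability without / with the watermark

# FPR(tau) = P(M > tau | H0) and TPR(tau) = P(M > tau | H1),
# where M ~ Binomial(k, p); binom.sf(tau, k, p) returns P(M > tau).
fpr = binom.sf(tau, k, p_o)
tpr = binom.sf(tau, k, p_w)
print(f"FPR = {fpr:.2e}, TPR = {tpr:.3f}")  # roughly 1.6e-05 and 0.99
```

The same computation also makes the response's second scenario easy to check: raising $p_o$ to 0.7 forces a much higher threshold to keep the same FPR, which in turn lowers the achievable TPR.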
"{\"comment\": \"Hi Reviewers,\\n\\nWe are approaching the deadline for author-reviewer discussion phase. Authors has already provided their rebuttal. In case you haven't checked them, please look at them ASAP. Thanks a million for your help!\"}",
"{\"comment\": \"Thank you for your positive feedback on our revisions. We also deeply appreciate your valuable contributions throughout the review process!\"}",
"{\"summary\": \"This paper presents VINE, a watermarking method designed to withstand various image editing techniques enabled by advanced generative models. It also introduces W-Bench, a benchmark that evaluates watermark robustness against multiple types of edits, making it a valuable resource for watermarking research.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is clearly written and organized, with effective figures explaining both W-Bench and VINE.\", \"The paper provides rigorous evaluations, testing VINE and eleven other watermarking models on diverse editing techniques.\"], \"weaknesses\": [\"EditGuard is primarily designed for editing detection, not robust watermarking, and it was not tested with its most robust configuration. This impacts the fairness of the evaluation, as EditGuard\\u2019s focus and strengths differ from VINE\\u2019s intended use.\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}"
]
} |
15lk4nBXYb | CCM-DiT: Camera-pose Controllable Method for DiT-based Video Generation | [
"Yuelei Wang"
] | Despite the significant advancements made by Diffusion Transformer (DiT)-based methods in video generation, a notable gap remains in camera-pose control. Existing works such as OpenSora do not adhere precisely to anticipated trajectories, thereby limiting their utility in downstream applications such as content creation.
Therefore, we introduce a novel approach that achieves fine-grained control by embedding sparse camera-pose information into the temporal self-attention layers. We employ LoRA to minimize the impact on the original attention layer parameters during fine-tuning and strengthen camera-pose supervision in the loss function.
After fine-tuning the OpenSora’s ST-DiT framework on the RealEstate10K dataset, experiments demonstrate that our method outperforms LDM-based methods for long video generation, while maintaining optimal performance in trajectory consistency and object consistency. | [
"Video Generation",
"Diffusion Models"
] | https://openreview.net/pdf?id=15lk4nBXYb | https://openreview.net/forum?id=15lk4nBXYb | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"iZqXRNGjEM",
"gmJNWsYFIc",
"Th5ZiffgG3",
"P647NELrlI",
"6Q2jgAhW0N"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730822236924,
1730542372664,
1730003031204,
1731617097354,
1730217174817
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13388/Reviewer_g6Tj"
],
[
"ICLR.cc/2025/Conference/Submission13388/Reviewer_J1Rw"
],
[
"ICLR.cc/2025/Conference/Submission13388/Reviewer_pqnn"
],
[
"ICLR.cc/2025/Conference/Submission13388/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13388/Reviewer_6DBB"
]
],
"structured_content_str": [
"{\"summary\": \"This paper presents an approach to enable controlling camera pose for video generation based on Diffusion Transformer (DiT). It converts pixel-wise motion field based on Plucker coordinates into a sparse motion field, which is then injected into the temporal attention part of DiT. LoRA is used to fine-tune a pre-trained DiT model (Open-Sora). Experimental results on the RealEstate10K dataset are reported.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The idea of converting camera poses into pixel-wise embeddings is novel, which allow the video generation model to effectively understand the camera motion.\\n\\n2. This paper studies three common ways of incorporating camera pose embedding into a DiT model, which could be useful for future work.\", \"weaknesses\": [\"1. The proposed sparse motion encoding module seems almost identical to the standard pixel-wise Plucker embedding. Compared with Eq. (3), the only difference in Eq. (4) is the embedding computation are performed on a set of sparse locations controlled by $s_x$ and $s_y$. It is not clear how the camera \\\"motion\\\" is encoded. Does the proposed approach convert the pixel-wise motion vectors shown in Fig. 4 into embeddings?\", \"2. A lot of definitions are not clear and math symbols are not used rigorously in the presentation, making the paper hard to follow and understand.\", \"a) The Sparse Motion Encoding Module and Temporal Attention Injection Module are not shown in Fig. 1 at all.\", \"b) In line #179 on page 4, how is $RT$ defined? Is it matrix multiplication similar to $RK^{-1}$?\", \"c) In line #195 on page 4, it says $F_s\\\\in \\\\mathbb{R}^{L\\\\times M\\\\times N}$? According to the definition in Eq.(4), the channel dimension shouldn't be 1?\", \"d) In Fig. 2, $c_p$, $c_s$, and $c_l$ are not defined in the main text. And the shape of $c_l$ is not clearly explained either.\", \"e) In Fig. 3, what are $s$, $p$, and $p_m$?\", \"3. The first two items of the claim contributions of the paper are essentially identical. Both of them are about incorporating camera poses into a DiT model.\", \"4. Only visual results of simple camera motion (zoom in, zoom out, and roundabout) are shown in the paper. No supplementary results are available. It is therefore hard to gauge the effectiveness of the proposed approach.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents an approach to training a DiT-based video diffusion model (OpenSora) to control the camera motions of generated videos.\\nThe camera motion, which is represented by Pl\\u00fccker coordinates, is first sparsely sampled (downsampled by x40) and then encoded into the \\\"motion latent\\\" via VAE encoder (MegViT-v2, inspired by Tora). Finally, motion latent is injected into the temporal attention layer of DiT via adaptive normalization. The model is finetuned on 16-frame videos from the RealEstate10K dataset. The authors demonstrate that visual quality and motion accuracy (FID, FVD, CamMC) outperformed baselines for 72 frame generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"To the best of my knowledge, this is the first work that tackles camera-motion-controlled video generation with open-source DiT (i.e. opensora).\", \"The idea of sparsely sampling motion fields before inputting them into the VAE encoder is new.\", \"They demonstrate that adaptive normalization for conditioning camera motions is the effective strategy for camera-conditioned video generation for the first time, which is consistent with the results demonstrated in trajectory-controlled generation (e.g. Tora).\", \"They quantitatively demonstrate that the generated videos have better motion and visual quality for 72-frame generation.\"], \"weaknesses\": \"Novelty:\\n\\n- As the authors have already acknowledged, the idea of using Plucker coordinates has already been introduced (e.g., VD3D). Additionally, the use of a VAE encoder and adaptive normalization has already been introduced by Tora. Following that, the main technical contribution is introducing a sparsely sampled motion field. The author argues that sparsely sampled motion fields contribute to performance improvement, but the authors fail to provide details results (e.g., visual results, more ablation study in Table 2, what about x1?) nor detailed motivation. Additionally, choosing this downsample factor seems heuristic with no intuitive justifications. I would appreciate it if the authors could provide more ablation studies and technical motivations for applying sparsely sampled motion fields.\", \"experiments\": [\"Although the model is trained on a 16-frame dataset, the model performs worse than other baselines for 16-frame generation. Additionally, the motivation for training only on 16-frame videos is unclear, given the model is tasked to generate longer-frame videos during the experiments. I would appreciate it if the authors could provide more explanations for this decision.\", \"The authors did not provide sufficient qualitative results or user study, where the superiority of their method is not convincing.\"], \"clarity\": [\"The paper lacks implementation details. For instance, I am not sure how adaptive normalization and cross-attention are performed exactly. Figure 3 seems inaccurate because the injection happens between two temporal attention layers, where the temporal attention layer should exist in only one of these locations. Please see the questions below for more requests for clarification.\"], \"questions\": [\"Could the authors provide a user study to assess the quality?\", \"Could the authors upload the generated videos and visual comparisons with baselines?\", \"Could the authors provide the implementation details of cross-attention and adaptive normalization? Including where these computations happen in relation to temporal attention computation. 
(In Figure 3, the injection happens between two temporal attention layers. This figure is wrong to me as exactly one temporal attention should exist either before or after the injection.)\", \"Could the authors provide the reasons why the models are trained on only 16-frame videos?\", \"Could the authors detail how the model is extended to 72 frames given the model is trained on 16 frames?\", \"The authors mention that \\\"object movement tends to be limited to small-scale motions\\\". I think this can be a big issue. Could the authors provide a detailed comparison with other baselines?\", \"Is the motivation for introducing sparse sampling of the motion field for computational efficiency? The authors lately argue that sparse sampling improves the results, but I am not convinced why the performance is improved. Could the authors provide more detailed reasons behind that?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"In this paper, the authors propose a DiT-based video generation method that embeds the camera poses as controllable signals. They use LoRA to fine-tune the attention layer parameters in the training. RealEstate10K dataset is used in the evaluation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors add the camera pose in the video generation, which is an interesting point.\", \"weaknesses\": \"1. Many details are missing in Fig 2. I do not find any explanations of what does frame pose mean and how to get the frame pose. How to convert frame pose to camera pose is also very unclear. It would be helpful to provide a step-by-step description of how to extract and process the pose information from the dataset, including definitions of frame pose and camera pose, and the conversion process between them.\\n2. The method is only evaluated on a single dataset, which is not sufficient to verify the effectiveness of the method. For example, authors can test on videos from WebVid and HD-VILA following MotionCtrl [1] paper.\\n\\n[1] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. 2024. MotionCtrl: A Unified and Flexible Motion Controller for Video Generation. In ACM SIGGRAPH 2024 Conference Papers (SIGGRAPH '24). Association for Computing Machinery, New York, NY, USA, Article 114, 1\\u201311. https://doi.org/10.1145/3641519.3657518\", \"questions\": \"1. In Fig 1, I am curious how to convert the text instruction \\\"Zoom-in\\\" to the motion field.\\n2. Eq (3) is unclear. Is P_{x,y} the Plucker embedding? But how to get R, K and t from the dataset? Does the dataset provide such information?\\n3. What is the major difference between this paper and the motionCtrl [1]? A detailed comparison of the proposed method with MotionCtrl, highlighting key differences in approach, architecture, and performance, would be helpful.\\n\\n[1] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. 2024. MotionCtrl: A Unified and Flexible Motion Controller for Video Generation. In ACM SIGGRAPH 2024 Conference Papers (SIGGRAPH '24). Association for Computing Machinery, New York, NY, USA, Article 114, 1\\u201311. https://doi.org/10.1145/3641519.3657518\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Unauthorized submission of manuscripts without the consent of co-authors.\"}",
"{\"summary\": \"This paper aims at controlling the camera viewpoints of the videos generated by the DiT-based video diffusion models. To achieve the precise camera viewpoint control, this paper utilizes the Pl\\u00fccker embedding as the camera representation. The Pl\\u00fccker embeddings are per-frame spatial maps, while the DiT-based diffusion (like OpenSora) do some downsamples in the temporal dimension. To deal with this conflict, this paper proposes a Sparse Motion Encoding Module to temporally downsample the Pl\\u00fccker embeddings, with the same ratio as the OpenSora VAE. This Sparse Motion Encoding Module is implemented by a MagVit2 like causal VAE. The generated latent motion is injected into the temporal attention layer of OpenSora using an adaptive normalization layer. Experiments demonstrate the superiority of proposed method on both short and long video generation (with camera control) task. Some ablation studies also prove the effectiveness of the proposed Sparse Motion Encoding Module and Temporal Attention Injection Module.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper train a VAE model to temporally downsample the Pl\\u00fccker embeddings, easing the conflict of different temporal length between Pl\\u00fccker embedding and the latent features.\\n2. The visualization results demonstrate the effectiveness of the proposed method on some simple camera trajectories, like paning and zooming.\", \"weaknesses\": \"1. The motivation of Sparse Motion Encoding Module is not well presented. Why the VAE model is used to compress the Pl\\u00fccker embedding? The encoder of VAE can be used to compress the Pl\\u00fccker embedding, what is the decoder used for? Besides, generally, the VAE model will not bring too much extra computation, and it can be reused once encoded, why this module **sparsely** sample some Pl\\u00fccker motion vector?\\n2. The writing is not good for this manuscript. For example, there are some typos, like the **MegVit-v2** in Line 198, the inconsistency of OpenSora and Open-Sora. Besides, some details are missing, like what is the input of the Sparse Motion Encoding Module. The rows 2 and 4 in Figure 4 does not provide too much information.\\n3. The experiments is not very convincing. See Question part.\", \"questions\": \"Besides the questions in the first point of weakness section, I have the following questions.\\n1. The MagVit2 model is designed to be able to compress the images and videos in a single model. Using some padding, the first image is treated as a separate image, thus the training of MagVit2 VAE is 17 frames (line 17) is reasonable. But, in line 259, the author said \\\"we extract 16-frames...\\\", I want to know how those 16 frame are padded and what is the output of the VAE encoder?\\n2. Can the author provide the reconstruction results, using l1 loss, for the reconstructed sparse Pl\\u00fccker embedding?\\n3. Whether the motion degree of objects of the whole scene degraded after adding the camera control-related modules? \\n4. In line 303, the author state that they use the different resolution for different video generation models, can the FID, FVD fully reflect the ranking of different models?\\n5. In the visualization results, the camera trajectories seems too simple, focusing on panning and zooming. 
I remember there are some more complex camera trajectories in RealEstate10K dataset, can the author provide some quantitative or qualitative results on those complex camera trajectories?\\n6. How to calculate the CamMC metric for SVD, AnimateDiff, OpenSora model, since they cannot take the camera trajectories as input.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
15dVqf7VXR | Learning with User-Level Local Differential Privacy | [
"Puning Zhao",
"Li Shen",
"Jiafei Wu",
"Zhe Liu",
"Rongfei Fan",
"Qingming Li",
"Huiwen Wu"
] | User-level privacy is important in distributed systems. Previous research primarily focuses on the central model, while the local models have received much less attention. Under the central model, user-level DP is strictly stronger than the item-level one. However, under the local model, the relationship between user-level and item-level LDP becomes more complex, thus the analysis is crucially different. In this paper, we first analyze the mean estimation problem and then apply it to stochastic optimization, classification, and regression. In particular, we propose adaptive strategies to achieve optimal performance at all privacy levels. Moreover, we also obtain information-theoretic lower bounds, which show that the proposed methods are minimax optimal up to logarithmic factors. Unlike the central DP model, where user-level DP always leads to slower convergence, our result shows that under the local model, the convergence rates are nearly the same between user-level and item-level cases for distributions with bounded support. For heavy-tailed distributions, the user-level rate is even faster than the item-level one. | [
"Local differential privacy",
"minimax"
] | Reject | https://openreview.net/pdf?id=15dVqf7VXR | https://openreview.net/forum?id=15dVqf7VXR | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rjchSmhDox",
"iBGlLyytfY",
"c4r3ra4kzb",
"VmxdkmUxp9",
"MQh5Rxzjme",
"LnjnfB4rdS",
"Lfie1pTxIW",
"LBgIHyhVTZ",
"F4zC1X4922",
"BDSW4EEqOD",
"1gFQRcAGrZ"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1731981435670,
1732633567435,
1734196264552,
1731984458297,
1730698391754,
1730750140064,
1737523717299,
1732260910481,
1731980427033,
1732697865937,
1730671097308
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5638/Reviewer_Qvtb"
],
[
"ICLR.cc/2025/Conference/Submission5638/Area_Chair_jD1Z"
],
[
"ICLR.cc/2025/Conference/Submission5638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5638/Reviewer_Qvtb"
],
[
"ICLR.cc/2025/Conference/Submission5638/Reviewer_skTW"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5638/Reviewer_skTW"
],
[
"ICLR.cc/2025/Conference/Submission5638/Reviewer_mdCH"
]
],
"structured_content_str": [
"{\"title\": \"Response\", \"comment\": \"Thank you very much for your positive feedback. We are encouraged that you have positive evaluation on our significance, organization and completeness of our work.\\n\\nThe references [1-3] are about user-level DP under the **central model**. Although these works and ours do not have exactly the same background, we will definitely mention them in our revised paper.\\n\\nTo the best of our knowledge, the statement \\\"Moreover, we also provide the first analysis on nonparametric classification and regression problems under user-level $\\\\epsilon$-LDP\\\" is still accurate. [4] studies linear regression instead of nonparametric classification and regression. Since nonparametric statistics do not impose any model assumptions on the distribution, the techniques are crucially different. [5] and [6] are about item-level LDP. However, following your comment, we will discuss them in the related work section.\"}",
"{\"comment\": \"Thank you for the response and I have no other comments.\"}",
"{\"metareview\": \"## Summary of Contributions\\n\\nThis paper studies learning tasks in user-level local DP (LDP) setting. Here each of the $n$ users has $m$ items drawn from an unknown distribution, and we want to satisfy LDP where the privacy unit is each user (i.e. all $m$ items can change). The authors study this model for the problem of mean estimation, stochastic optimization, classification and regression. They prove nearly tight (minimax) bounds for these problem for all ranges of $\\\\epsilon$ values.\\n\\n## Strengths\\n\\n- The problems and the user-level DP settings are natural and important.\\n- The bounds obtained here are nearly tight.\\n\\n## Weaknesses\\n\\n- It is unclear how novel this work is. The (Bassily & Sun, 2023) paper cited here already obtain tight bounds for small $\\\\epsilon$ values, which is arguably the most important regime of parameters. Similarly, the techniques in this paper are fairly similar to that paper (i.e. dividing user into groups, rotation for $\\\\ell_2$ vectors) and it seems like the only main differences are some sort of parameter tuning / rearranging of different steps. (Note that the novelties are not clearly highlighted in the paper.)\\n\\n## Recommendation\\n\\nGiven the weakness, we believe that the paper's contributions are below the bar for ICLR and we recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"There are some clarification with regards to novelty and relation to previous work during discussion. However, it remains true that the algorithms are very minor tweak of previous algorithms and thus the novelty remains unclear.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your careful reading of this paper and valuable comments. We reply to your comments as follows.\\n\\n# Reply to Question 1\\n\\nIn industrial applications, from an accuracy-first perspective, companies usually consider only the case with $\\\\epsilon >1$. Moreover, a recently trend is to use message shuffling techniques. In these cases, we usually consider relatively large $\\\\epsilon$. For example:\\n\\nK.Talwar et al. Samplable Anonymous Aggregation for Private Federated Data Analysis, CCS, 2024. (apple team)\\n\\nExposure Notification Privacy-Preserving. \\\"Exposure Notification Privacy-Preserving Analytics (ENPA) White Paper.\\\" ENPA_White_Paper. pdf.\\n\\n# Reply to Question 2\\n\\nWe mean that, for arbitrary $m$, if we randomly pick a sample from a user, $n$ users with $m$ samples per user under user-level $\\\\epsilon$-LDP is just equivalent to item-level $\\\\epsilon$-LDP with $n$ samples. The result is not limited to the case with $m=1$.\\n\\n# Reply to Question 3\\n\\nFor user-level $\\\\epsilon$-LDP, $m$ local samples combine together to generate an output, thus **local samples within the same user can share information with each other.** Therefore user-level $\\\\epsilon$-LDP does not ensure item-level $\\\\epsilon$-LDP. \\n\\n# Reply to Question 4\\n\\nWe explain it from information-theoretic perspective. A precise communication of a continuous random variable requires infinite number of bits. However, in user-level LDP, each user compresses all local samples into only one variable, which can only transmit limited information. As a result, as long as $n$ is fixed, no matter how large is $m$, the precision can not reach infinity. We refer to line 251-252 in our paper, which gives a lower bound. We also refer to line 969-986 for a proof.\\n\\n# Reply to Question 5\\n\\nIn line 322, our paper has stated that \\\"denote $n_0$ as the number of users in each group\\\". Now there are $n$ users divided into $\\\\lceil d/\\\\epsilon \\\\rceil$ groups, thus $n_0=n/\\\\lceil d/\\\\epsilon\\\\rceil$.\\n\\n# Reply to Question 6\\n\\nThanks for this comment. We study the bounded gradient case because we want to compare with the item-level case (Duchi et al. .Local privacy and statistical minimax rates. FOCS 2013). We will add the analysis for unbounded gradients in our revised paper.\"}",
"{\"summary\": \"This paper examines user-level privacy in a distributed setting, particularly in user-level local differential privacy (ULDP). The authors analyze mean estimation and its applications in stochastic optimization, classification, and regression, proposing adaptive strategies that optimize performance across various privacy levels. The authors claim that unlike in the central model, the convergence rates for user-level and item-level privacy are nearly equivalent in local models, with user-level privacy yielding even faster rates for heavy-tailed distributions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is very organized and presents its results in a clear manner.\\n\\n2. Matching information-theoretic lower bounds are also derived which enhances the completeness of this work.\", \"weaknesses\": \"This paper studies ULDP on various problem settings: mean estimation, stochastic optimization, classification and regression. It is clear from Table 1 how the proposed rates in ULDP is different from the rates in item-level LDP. However, some relevant papers appear to be missing from the references. For example, [1], [2] and [3]\\n\\n[1]: Li, Bo, Wei Wang, and Peng Ye. \\\"Improved Bounds for Pure Private Agnostic Learning: Item-Level and User-Level Privacy.\\\" arXiv preprint arXiv:2407.20640 (2024).\\n\\n[2]: Cummings, Rachel, et al. \\\"Mean estimation with user-level privacy under data heterogeneity.\\\" Advances in Neural Information Processing Systems 35 (2022): 29139-29151.\\n\\n[3]: Charles, Zachary, et al. \\\"Fine-tuning large language models with user-level differential privacy.\\\" arXiv preprint arXiv:2407.07737 (2024).\\n\\n\\nBesides, on line 132 and 133\\\" Moreover, we also provide the first analysis on nonparametric classification and regression problems under user-level \\u03f5-LDP\\\" is not accurate. To the best of my knowledge, [4] also studies regression in the ULDP setting under sparsity constraint. From my perspective, sparse estimation problem in LDP model ([5], [6]) might also could also be a valuable addition to the related work section.\\n\\n[4]: Ma, Yuheng, Ke Jia, and Hanfang Yang. \\\"Better Locally Private Sparse Estimation Given Multiple Samples Per User.\\\" arXiv preprint arXiv:2408.04313 (2024).\\n\\n[5]: Zhu, Liyang, et al. \\\"Improved Analysis of Sparse Linear Regression in Local Differential Privacy Model.\\\" arXiv preprint arXiv:2310.07367 (2023). \\n\\n[6]: Zhou, Mingxun, et al. \\\"Locally differentially private sparse vector aggregation.\\\" 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.\", \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper first analyzes the mean estimation problem and then extends the findings to stochastic optimization, classification, and regression. Specifically, the authors propose adaptive strategies to achieve optimal performance across all privacy levels. They also derive information-theoretic lower bounds, demonstrating that the proposed methods are minimax optimal up to logarithmic factors. Notably, unlike the central DP model, where user-level DP generally leads to slower convergence, the results show that, under the local DP model, convergence rates are nearly identical between user-level and item-level cases for distributions with bounded support. For heavy-tailed distributions, the user-level rate is even faster than the item-level rate.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper tackles significant learning problems under user-level local differential privacy (LDP) constraints and establishes several tight lower and upper bounds.\", \"weaknesses\": \"Some statements throughout the paper are somewhat unclear, which can make parts of the presentation difficult to follow.\\n\\nFor the stochastic optimization problem, only the bounded gradient case and strongly convex objective functions are considered, which may not be sufficiently practical for broader applications.\", \"questions\": \"1. Why is the case of $\\\\epsilon > 1$ considered interesting for LDP studies?\\n\\n2. In Proposition 1 (2), to ensure user-level $\\\\epsilon$-LDP from item-level $\\\\epsilon$-LDP, if we randomly pick a sample from each user, why is it stated as ''$n$ users with $m$ samples per user'' instead of ''$n$ users with $1$ sample per user''?\\n\\n3. For Definition 1, could you explain in detail why the definition of user-level $\\\\epsilon$-LDP does not ensure item-level $\\\\epsilon$-LDP?\\n\\n4. For Theorem~1, I am unable to understand why is it said that the mean squared error will never converge to zero with increasing $m$ if $n$ is fixed.\\n\\n5. What does $n_0$ represent in Equation (6)?\\n\\n6. For the stochastic optimization problem, why is only the bounded gradient case considered? Why can't the private mean estimation over unbounded support developed in the paper be used for the unbounded gradient case, which seems more interesting and important in practice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Global Response\", \"comment\": \"We thank the reviewers for your time in reviewing this paper, as well as your valuable comments. We are encouraged that reviewers agree that our paper is complete and well organized (Reviewer Qvtb) tackle a significant problem (Reviewer sKTW). There are also some detailed comments that help us to improve the paper. Following these comments, we have revised the paper and the revisions are marked with **blue** color.\\n\\nWe respond to your comments below. Hope that these responses together with the revised manuscript can address your concerns. We are also looking forward to the reevaluation of our paper.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your comment. Now we respond to your questions.\\n\\n# Reply to Question 1\\n\\nIf $\\\\epsilon > 1$, then we just use eq.(65), which is the standard minimax lower bound that holds even for non-private data. We do not need to derive eq.(64) from eq.(62) again. eq.(64) only holds for $\\\\epsilon < 1$. Therefore our proof is correct here. \\n\\nFollowing your questions, in our revised paper, we have emphasized this point to make it clearer. The new statements are:\\n\\nWith $\\\\epsilon<1$, let $s\\\\sim 1/\\\\sqrt{nm\\\\epsilon^2}$, then $\\\\text{P}(\\\\hat{V}\\\\neq V)\\\\sim 1$. Hence\\n\\n```\\n\\\\inf_{\\\\hat{\\\\mu}} \\\\underset{Q\\\\in \\\\mathcal{Q}_\\\\epsilon}{\\\\inf} \\\\underset{p\\\\in \\\\mathcal{P}_\\\\mathcal{X}}{\\\\sup}\\\\mathbb{E}[(\\\\hat{\\\\mu}-\\\\mu)^2]\\\\gtrsim \\\\frac{D^2}{nm\\\\epsilon^2}\\n```\\n\\nIf $\\\\epsilon>1$, then from standard minimax analysis for non-private problems, \\\\textcolor{blue}{the estimation error can not be smaller than $\\\\sigma^2/(mn)$, with $\\\\sigma^2$ being the sample variance. The maximum value of $\\\\sigma^2$ is $D^2$. Therefore} it can be easily shown that\\n\\n```\\n\\\\underset{\\\\hat{\\\\mu}}{\\\\inf} \\\\underset{Q\\\\in \\\\mathcal{Q}_\\\\epsilon}{\\\\inf}\\\\underset{p\\\\in \\\\mathcal{P}_\\\\mathcal{X}}{\\\\sup}\\\\mathbb{E}[(\\\\hat{\\\\mu}-\\\\mu)^2]\\\\gtrsim \\\\frac{D^2}{nm}.\\n```\\n\\n# Reply to Question 2\\n\\nThanks for this suggestion. To derive the lower bound, we need to bound the total variation (TV) distance, which is shown in Lemma 10 in Appendix G.\\n\\n# Reply to Question 3\\n\\nCurrent model is actually **interactive**. For example, in the mean estimation problem, we use a two-stage approach, in which the second stage depends on the first stage.\\n\\n# About the novelty of this paper\\n\\nTo the best of our knowledge, our work is the first attempt to study user-level LDP problems for **general $\\\\epsilon$**. Moreover, it is also the first attempt to study nonparametric classification and regression problem with user-level LDP.\\n\\nFor user-level LDP problems, different privacy budget $\\\\epsilon$ requires different methods. We take the multi-dimensional mean estimation problem as an example. We conduct **user splitting** (divide users into groups, and each group is responsible for one component) for small $\\\\epsilon$, and **budget splitting** (let the budget to be $\\\\epsilon / d$ for each component) for very large $\\\\epsilon$. For medium $\\\\epsilon$, we design a new grouping strategy to achieve a smooth transition between these two extremes.\\n\\nWe are looking forward to your further comments and questions.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your detailed responses to my questions. While the clarifications provided are helpful and address several points, some concerns regarding the practicality of the bounded gradient assumption and the presentation's clarity remain.\\n\\nI maintain my score, as the paper is marginal in its present form. I look forward to seeing the revised manuscript and the planned extensions.\"}",
"{\"summary\": \"This paper addresses the problem of achieving user-level local differential privacy (LDP) across various statistical tasks, including mean estimation, stochastic optimization, classification, and regression. By tailoring privacy mechanisms to different privacy levels, the authors propose algorithms that attain optimal performance rates under user-level LDP, achieving minimax optimality up to logarithmic factors. Unlike the central model, where user-level privacy often implies slower convergence, the local model yields convergence rates comparable to item-level LDP, with even faster rates in heavy-tailed distributions. This work provides both theoretical bounds and adaptive strategies, expanding the scope of user-level LDP applications in distributed systems.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"It addressed user-level DP, which is a relatively less explored but extremely relevant area\", \"It studied a wide variety of tasks (mean estimation, stochastic optimization, nonparametric classification and regression)\"], \"weaknesses\": [\"Technical novelty is unclear\", \"Some proof is unclear\"], \"questions\": \"1. The authors highlight the regime where \\\\eps > 1 in the introduction. Yet, it is unclear how this regime is handled in the proof of the lower bound. For example, in the proof of Theorem 2, how do we get from Eq. 62 to Eq. 64, if \\\\eps > 1? I understand that the current proof holds for \\\\eps < 1. Similar questions exist in the proof of Theorems 6 and 7.\\n\\n2. Following the above, it would be useful to highlight the difference in the lower bound proof for item-level and user-level LDP, especially the regime when \\\\eps >1. \\n\\n3. It seems to me that the current local model is **non-interactive**? Can the authors comment on the **interactive** model? That is, can the proposed algorithms be easily extended to the interactive model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
15UetYngA7 | FuseChat: Knowledge Fusion of Chat Models | [
"Fanqi Wan",
"Longguang Zhong",
"Ziyi Yang",
"Ruijun Chen",
"Xiaojun Quan"
] | While training large language models (LLMs) from scratch can indeed lead to models with distinct capabilities and strengths, it incurs substantial costs and may lead to redundancy in competencies. Knowledge fusion aims to integrate existing LLMs of diverse architectures and capabilities into a more potent LLM through lightweight continual training, thereby reducing the need for costly LLM development. In this work, we propose a new framework for the knowledge fusion of chat LLMs through two main stages, resulting in FuseChat. Firstly, we conduct pairwise knowledge fusion on source chat LLMs of varying structures and scales to create multiple target LLMs with identical structure and size via lightweight fine-tuning. During this process, a statistics-based token alignment approach is introduced as the cornerstone for fusing LLMs with different structures. Secondly, we merge these target LLMs within the parameter space, where we propose a novel method for determining the merging coefficients based on the magnitude of parameter updates before and after fine-tuning. We implement and validate FuseChat using six prominent chat LLMs with diverse architectures and scales, including OpenChat-3.5-7B, Starling-LM-7B-alpha, NH2-SOLAR-10.7B, InternLM2-Chat-20B, Mixtral-8x7B-Instruct, and Qwen-1.5-Chat-72B. Experimental results on two instruction-following benchmarks, AlpacaEval 2.0 and MT-Bench, demonstrate the superiority of FuseChat-7B over baselines of various sizes. Our model is even comparable to the larger Mixtral-8x7B-Instruct and approaches GPT-3.5-Turbo-1106 on MT-Bench. | [
"Model Fusion",
"Large Language Models"
] | https://openreview.net/pdf?id=15UetYngA7 | https://openreview.net/forum?id=15UetYngA7 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zJM4BlXjQc",
"xu2L3ZL1oY",
"vhGyc0p3ds",
"rBTz4reNLG",
"qk4MmX0sAX",
"kdhdVRKcLf",
"gbdjlFveo3",
"cxWUFU0QD3",
"ac6zDbhkep",
"HSYV8oW3so",
"GTzdnaW2CP",
"F3RqDH89QN",
"EyMxxQTrdC",
"D6HQNJo2hp",
"CHdDpR0g4z",
"BTCWzRuOZv",
"98fpJRyDyu",
"5gwoR2yifU"
],
"note_type": [
"official_comment",
"official_review",
"comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732084087864,
1730501291551,
1737593533651,
1730280944512,
1732501634355,
1732586357492,
1732332238858,
1732781115433,
1732083758988,
1732506651608,
1732084600739,
1732085257531,
1732084643639,
1733156011242,
1732507249047,
1729751578480,
1732085271074,
1732786028101
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Reviewer_qdGR"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Reviewer_CcuV"
],
[
"ICLR.cc/2025/Conference/Submission6352/Reviewer_CcuV"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Reviewer_qdGR"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Area_Chair_EmLf"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Reviewer_UpRG"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6352/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Comment by Authors: Part 1\", \"comment\": \"Thank you for reviewing our paper and providing insightful feedback. We're glad you find our work well-motivated and practical. We will address your concerns in the following points.\\n\\n> **Q1: Regarding the technical innovation and contribution of FuseChat.**\\n\\n**A1:** We thank the reviewer for the feedback regarding the technical contributions of our work. We would like to emphasize the uniqueness of FuseChat and clarify its distinctions from prior works such as FuseLLM and TIES from multiple perspectives:\\n\\n**1. Distinction from FuseLLM**\\n\\n**a. Motivation**\\nWhile FuseLLM emphasizes the fusion of multiple base LLMs through continual pre-training, FuseChat focuses on integrating diverse chat-oriented LLMs into a unified chat model via supervised fine-tuning. This difference in both training objectives and data makes FuseChat essential in the context of chat-focused LLMs. Moreover, our work extends beyond FuseLLM\\u2019s scope by fusing six distinct chat LLMs (as opposed to FuseLLM\\u2019s three base models), thereby demonstrating the scalability and depth of our methodology.\\n\\n**b. Methodology**\\nWhile FuseLLM directly employs multi-teacher distillation to fuse multiple base LLMs, FuseChat employs a sophisticated fuse-and-merge approach, beginning with pairwise fusion and advancing to our SCE merging strategy. This new method is not only highly scalable and efficient but also better resolves knowledge conflicts in the parameter space. Simultaneously, it integrates the strengths of each source LLM with precision. By adopting this refined approach, FuseChat noticeably enhances the final model\\u2019s performance, distinguishing it from the techniques employed by FuseLLM.\\n\\n**c. Scalability**\\nAnother key strength of FuseChat lies in its plug-and-play approach for integrating new LLMs, which is more efficient than FuseLLM. Instead of combining distribution matrices from all source LLMs, FuseChat merges the distribution matrices of the new source LLM with a pivot LLM. This streamlined process reduces computational and storage costs, ensuring superior scalability as the number of LLMs increases.\\n\\n**d. Experimental Validation**\\nOur experimental setup demonstrates the distinct focus of FuseChat. By fusing six varied chat LLMs (OpenChat-3.5-7B, Starling-LM-7B-alpha, NH2-SOLAR-10.7B, InternLM2-Chat-20B, Mixtral-8x7B-Instruct, and Qwen-1.5-Chat-72B), we validate FuseChat on AlpacaEval 2.0 and MT-Bench, assessing both instruction-following and conversational capabilities. This is in contrast to the base-model-focused experiments of FuseLLM and underscores the tailored contributions of FuseChat to the domain of chat LLM fusion.\\n\\n**2. Distinction from the TIES merging method**\", \"our_sce_merging_strategy_introduces_considerable_innovations_compared_to_the_ties_merging_method\": \"**a. Automation and Precision**\\nUnlike TIES, which relies on manually tuned, model-level coefficients, our SCE automates the merging process by leveraging weight updates from a pivot LLM and computing matrix-level coefficients. This enables the fine-grained incorporation of diverse benefits across LLMs, which is difficult to achieve with manual hyperparameter tuning.\\n\\n**b. 
Nuanced Parameter Adjustments**\\nIn our specific context, where target LLMs are trained on identical datasets with relatively subtle parameter variations, SCE excels at capturing and preserving the distinctive advantages of each LLM through nuanced matrix-level parameter updates.\\n\\n**c. Superior Performance**\\nExperimental results (e.g., Table 4) demonstrate that SCE outperforms baseline merging techniques including TIES within our framework, validating its efficacy and impact.\\n\\nWe will incorporate these detailed discussions into the revised manuscript to provide a clearer distinction of our work from previous approaches.\"}",
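As a rough illustration of the matrix-level, update-magnitude-based coefficients described above (points a and b of the SCE discussion): the sketch below computes, for a single parameter matrix, coefficients proportional to the squared magnitude of each target LLM's update relative to the pivot and merges the updates accordingly. This is a simplified stand-in under our own assumptions, not a faithful reimplementation of the paper's SCE procedure; all names are ours.

```python
import torch

def merge_one_matrix(w_pivot, target_weights):
    """Magnitude-weighted merge of ONE parameter matrix (illustrative only).

    w_pivot:        the pivot LLM's weight matrix.
    target_weights: the same matrix from each target LLM produced by pairwise fusion.
    Each target's coefficient is proportional to the squared magnitude of its update
    relative to the pivot, so no manual, model-level coefficient tuning is needed.
    """
    deltas = [w - w_pivot for w in target_weights]
    mags = torch.stack([d.pow(2).sum() for d in deltas])
    coeffs = mags / mags.sum().clamp_min(1e-12)      # matrix-level merging coefficients
    merged_delta = sum(c * d for c, d in zip(coeffs, deltas))
    return w_pivot + merged_delta
```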
"{\"summary\": \"This paper proposes a new framework, FuseChat, to fuse diverse LLMs into a single LLM capable of performing various tasks. They first apply pairwise knowledge fusion on source chat LLMs to create multiple target LLMs with identical structures. To fuse models with different vocabulary, they introduce a statistics-based token alignment approach to obtain probabilistic distribution matrices. Then, they fuse all the target LLMs within the parameter space by utilizing the proposed new merging method, SCE. In their experiments, they conducted extensive experiments to investigate their framework with diverse source LLMs and evaluation datasets. They also offered a number of model analyses and ablation studies.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper studies an interesting question of how to fuse multiple chat LLMs into a potent chat LLM. The paper is well-written and well-organized.\\n2. The paper has extensive experiments to investigate the effectiveness of their proposed framework and each component in their framework.\\n3. Their fusion method is also computation-friendly, which doesn't require additional training or dataset.\", \"weaknesses\": \"1. They didn't provide a significance test to show if their proposed method significantly outperforms their baselines (e.g. FuseLLM/OpenChat-3.5-7B Multi) or not. Because the improvement in some tasks is small, it would be better to show whether the improvement is significant.\\n2. Table 1's caption needs to be improved. It would be helpful if they clarified what bold font and underscore mean in their table and what the percentage means.\", \"questions\": \"1. For Figure 3, I wonder if pivot LLM is the original OpenChat-3.5-7B or OpenChat-3.5-7B after fusing training. I also wonder if Target LLM is the OpenChat-3.5-7B after fusing training or the final FuseChat model. Please clarify these.\\n2. I wonder if you could categorize the samples in your training data into domains by MT-Bench and see how the distribution is in your training set.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"The paper introduces FUSECHAT, a framework designed for the knowledge fusion of chat-based large language models (LLMs). The proposed fuse-and-merge framework integrates the strengths of diverse LLMs through lightweight continual training while avoiding the high cost and potential redundancy associated with developing new LLMs from scratch. Experimental results indicate that the proposed model outperforms existing methods across AlpacaEval and MT-Bench.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe motivation is practical and significant, offering a cost-effective solution for integrating capabilities of different heterogeneous LLMs without training new models from scratch.\\n\\n2. The two-stage framework effectively combines heterogeneous model knowledge through distillation into homogeneous models followed by parameter merging, with a well-designed token alignment strategy.\\n\\n3. Comprehensive experiments validate the framework's effectiveness, showing competitive performance against different methods.\", \"weaknesses\": \"1. The paper's technical contribution appears somewhat limited. The approach can be viewed as a combination of pairwise FuseLLM and model merging (similar to TIES-Merging), both of which have been previously established as effective methods. The improved performance, while notable, follows logically from the combination of these known techniques, making the technical innovation less impressive than desired.\\n2. Several claims in the paper require further clarification. For instance, the statement on line 92 of the Introduction that \\\"FUSELLM limits its exploration to source LLMs of the same size as the target LLM\\\" appears inconsistent with FUSELLM's design, which can handle different-sized source models. Furthermore, FUSECHAT doesn't present special designs for distilling from differently-sized source models. Additionally, the choice of MinCE for the Fusion function in Equation 2 reduces to single-model distillation of the model with lower CE score in each pair, raising questions about the necessity of the pairwise approach.\\n3. There are concerns regarding experimental details. The combination weight is 0.9 in Equation 4, which means only 0.1 weight is assigned to distillation loss. Compared to 0.9 for SFT, this setting potentially undermines the significance of the distillation process. Moreover, the modest performance difference between FUSECHAT and Pairwise Fusion shown in Table 1 warrants statistical significance testing to validate the improvements.\", \"questions\": \"1.\\tHave the authors considered individual model distillation instead of pairwise fusion, given the MinCE choice in Equation 2?\\n2.\\tWhat is the rationale behind the 0.9/0.1 weight distribution in Equation 4? \\n3.\\tCan the authors provide statistical significance tests for the improvements over Pairwise Fusion?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for responding to my concern.\", \"comment\": \"Thank you for responding to my concern, but I feel that there is still a slight lack of innovation. I plan to keep my score unchanged.\"}",
"{\"comment\": \"Dear Reviewer UpRG,\\n\\nWe sincerely appreciate the time and effort you have devoted to providing thoughtful reviews and valuable feedback. We have carefully addressed your concerns in detail and incorporated additional experiments and analyses, as summarized in the discussion:\\n\\n- Demonstrated the challenges of knowledge fusion and FuseChat.\\n- Detailed the token alignment and SCE merging methods.\\n- Explained the imbalanced performance achieved by FuseChat in different domains.\\n\\nWe hope these revisions and discussions have adequately addressed your concerns. As the Author-Reviewer discussion phase is ending soon, we would be grateful for any additional comments or questions that could further enhance our work. Thank you again for your time and effort.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Thanks for your response.\", \"comment\": \"I found that my concerns have been addressed. I have adjusted my rate accordingly.\"}",
"{\"title\": \"Major Revisions of the Manuscript\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your thorough review and valuable feedback, which have greatly contributed to improving the quality of our manuscript. We have carefully addressed all comments and suggestions through comprehensive revisions. Below, we summarize the major changes made to the manuscript, with key updates highlighted in ${\\\\color{blue} blue}$ text in the PDF, along with other refinements to meet the page limit.\\n\\n> **Key Revisions**\\n\\n1. We have refined the discussion regarding FuseLLM's limitations in **Section 1 (Lines 92\\u201393)** for enhanced precision. (Reviewer **CcuV**)\\n\\n2. We have updated the caption for **Table 1** for improved clarity. (Reviewer **qdGR**) \\n\\n3. We have added an explanation regarding the model's performance differences across various domains in **Section 4.2 (Lines 432\\u2013434)**, indicating the performance is largely determined by the availability of domain-specific training data and the competency of the source LLMs in those domains. (Reviewers **qdGR, UpRG**)\\n \\n4. We have included the domain distribution of training data across different domains in **Appendix C (line 852-866), Table 6**. (Reviewer **qdGR**)\\n \\n5. We have added significance tests for the performance improvements of FuseChat over Pairwise Fusion and OpenChat-3.5-7B Multi in **Appendix F (line 929-942), Table 8**, demonstrating the strong statistical significance of the final FuseChat model's superiority over these baselines. (Reviewers **qdGR, CcuV**) \\n\\n6. We have added an analysis comparing Pairwise Fusion and Single-Model Distillation in **Appendix G (line 945-971), Table 9, Table 10**, , establishing the consistent superiority of pairwise fusion across five source LLMs and the enhanced effectiveness of merging pairwise fusion models versus single-model distillation models. (Reviewer **CcuV**) \\n\\n7. We have elaborated on the rationale behind the large weight of fusion loss in **Appendix H (line 972-986), Table 11**, explaining its relationship to the approximately threefold magnitude compared to the SFT loss. (Reviewer **CcuV**)\\n \\nWe are deeply grateful for your insightful feedback, which has been instrumental in strengthening our work. We hope these revisions and additional analyses thoroughly address all concerns. As the Author-Reviewer discussion phase nears its conclusion, we welcome any further suggestions that could help us improve the manuscript.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for reviewing our paper. We greatly value your insightful feedback and appreciation of our work's significance. Below, we address your concerns in detail.\\n\\n> **Q1: Regarding the significance test in our experiments.**\\n\\n**A1:** Thank you for raising this important point. To show the statistical significance of our results, we conducted a t-test on MT-Bench to compare the performance of our proposed FuseChat-7B with OpenChat-3.5-7B Multi, which fuses multiple source LLMs simultaneously. The results shown in the table below reveal a p-value well lower than 0.05. This confirms that **FuseChat-7B achieves statistically significant improvements over OpenChat-3.5-7B Multi**. These statistical results will be incorporated into the revised manuscript.\\n\\n| Model | t-statistic | p-value |\\n|:-----------:|:-----------:|:-------:|\\n| FuseChat-7B vs. OpenChat-3.5-7B Multi | 3.32756 | 0.00108 |\\n\\n> **Q2: Regarding the caption of Table 1.**\\n\\n**A2:** Thank you for your thoughtful suggestion. We will address this in the revised version by improving the caption for Table 1. Specifically, we will clarify that the bold font denotes the best performance among all the fused LLMs, while the underscore indicates the second-best performance. Moreover, the percentages represent the relative performance improvement compared to the OpenChat-3.5-7B SFT baseline model. We believe these clarifications enhance the table\\u2019s interpretability and precision.\\n\\n> **Q3: Regarding the details of pivot LLM in Figure 3.**\\n\\n**A3:** Thank you for your observation. As clarified in Section 3.1 (lines 179\\u2013190), the term pivot LLM refers to the original OpenChat-3.5-7B model prior to the application of pairwise fusion, and the term target LLM describes an intermediate model generated through pairwise fusion between the pivot LLM and an individual source LLM. Our approach first performs pairwise fusion between the pivot LLM and each source LLM independently, resulting in a series of corresponding target LLMs. These intermediate models are then combined using our SCE merging technique to create the final FuseChat model. We hope this explanation resolves any potential ambiguity.\\n\\n> **Q4: Regarding the domain distribution of samples in training data.**\\n\\n**A4:** We appreciate the reviewer\\u2019s concern regarding the domain distribution of samples in the training data. We followed the approach described in Magpie [1] and employed the Llama-3-8B-Instruct model to classify our 95,000 training examples into eight distinct domains as defined by MT-Bench. After excluding approximately 7,000 samples due to anomalous classification errors, the final domain distribution is summarized in the following table:\\n\\n| Statistics | Math | Extraction | Roleplay | Writing | STEM | Reasoning | Humanities | Coding | Total |\\n|:--------------:|:-----:|:----------:|:--------:|:-------:|:----:|:---------:|:----------:|:------:|:-----:|\\n| Num. Sample | 15079 | 20329 | 8137 | 7627 | 983 | 7948 | 1403 | 27119 | 88625 |\\n| Percentage (%) | 17.01 | 22.94 | 9.18 | 8.61 | 1.11 | 8.97 | 1.58 | 30.60 | 100 |\\n\\nThe resulting data distribution demonstrates substantial diversity, which aligns with our primary objective to assess the model's general capabilities rather than domain-specific performance. As shown in Figure 3 of the paper, pairwise fusion leads to a marked improvement in the target LLMs' math and coding abilities. 
This enhancement is primarily due to **the strong performance of the source LLMs in these domains, coupled with the relatively high proportion of Math and Coding samples in our dataset.** Interestingly, despite a notable representation of the Extraction domain in the dataset, the target LLMs show limited improvement in this area. This can be attributed to **the relatively weaker performance of the source LLMs in extraction tasks, highlighting the critical role of selecting appropriate source LLMs for domain-specific objectives.** We will integrate this detailed analysis in the revised manuscript to provide further clarity.\\n\\n[1] Xu et al. Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing, 2024.\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nI noticed that you haven't yet responded to the author's rebuttal. As November 26 is the last day for reviewers to ask questions to authors, could you please review their responses and provide your feedback?\\n\\nYour timely response will ensure a thorough evaluation process and help with making the final recommendation. Thank you for your prompt attention to this matter.\\n\\nArea Chair\"}",
"{\"title\": \"Official Comment by Authors: Part 2\", \"comment\": \"> **Q2: Regarding the difference between pairwise fusion and single-model distillation.**\\n\\n**A2:** Thank you for raising this insightful point. The key distinction between pairwise fusion and single-model distillation lies in their respective learning paradigms. **In pairwise fusion, the model selectively acquires knowledge based on the quality of outputs from the source LLM or the pivot LLM**, guided by lower cross-entropy (CE) values. This mechanism ensures that the model learns from the stronger performer in each sample. In contrast, **single-model distillation relies exclusively on the source LLM, implicitly assuming that the source LLM consistently provides the superior results**.\\n\\nTo address your comment more rigorously, we conducted additional experiments comparing the two approaches. Specifically, we replaced the pairwise fusion strategy in FuseChat with direct distillation from a single (source) model, skipping the merging phase for direct comparison. The results summarized in the table below demonstrate that **pairwise fusion consistently outperforms single-model distillation across five source LLMs**. For clarity, D/P represents the results from direct distillation and pairwise fusion, respectively. Metrics reported include the Average Score on MT-Bench and the Length-Controlled Win Rate on AlpacaEval-2.0.\\n\\n| Model | MT-Bench | AlpacaEval-2.0 |\\n|:------------------------------:|:---------:|:--------------:|\\n| OpenChat-3.5-7B Qwen (D/P) | 6.79/**7.23** | 5.98/**14.98** |\\n| OpenChat-3.5-7B Mixtral (D/P) | 7.03/**7.24** | 16.10/**16.52** |\\n| OpenChat-3.5-7B InternLM (D/P) | 6.88/**7.21** | 6.54/**15.21** |\\n| OpenChat-3.5-7B SOLAR (D/P) | 7.09/**7.17** | 12.21/**16.51** |\\n| OpenChat-3.5-7B Starling (D/P) | 7.15/**7.22** | 14.89/**16.20** |\\n\\nWe further applied our proposed SCE method to fuse the models obtained through single-model distillation. The results below reveal that **merging the models derived from pairwise fusion produces a superior fused model compared to merging models from single-model distillation**. \\n\\n| Model | MT-Bench | AlpacaEval-2.0 |\\n|:-----------------:|:---------:|:--------------:|\\n| FuseChat-7B (D/P) | 6.91/**7.38** | 14.68/**17.16** |\\n\\nThese results highlight the effectiveness of the pairwise fusion approach, not only in standalone performance but also in enhancing the quality of the final fused model. We appreciate your attention to this critical aspect and hope these findings provide additional clarity.\\n\\n> **Q3: Regarding the rationale behind the 0.9/0.1 weight distribution in Equation 4.**\\n\\n**A3:** We appreciate the reviewer\\u2019s thoughtful observation regarding the rationale behind the 0.9/0.1 weight distribution in Equation 4. This choice is informed by the significant difference in magnitude between the SFT loss and the fusion loss. To further clarify, we conducted a new experiment using Qwen-1.5-Chat-72B as the source LLM and 128 instances randomly sampled from the training dataset. The resulting loss values for SFT and fusion are summarized in the table below.\\n\\n| Loss Type | Loss Value |\\n|:---------:|:------:|\\n| SFT | 0.5077 |\\n| Fusion | 1.3081 |\\n\\nThe results show that **the fusion loss is approximately three times larger than the SFT loss** in this experiment. This notable disparity underscores the importance of assigning a proportionally smaller weight to the fusion loss in Equation 4. 
Without this adjustment, an excessively high weight for the fusion loss could amplify the imbalance, potentially skewing the training process. Thus, the 0.9/0.1 distribution reflects a principled approach to mitigating this effect and achieving a balanced optimization.\"}",
"{\"title\": \"Official Comment by Authors: Part 1\", \"comment\": \"Thank you for reviewing our paper and providing insightful feedback. We\\u2019re glad you find our work logical and convincing and will address your concerns point by point.\\n\\n> **Q1: Regarding the challenges of knowledge fusion and FuseChat.**\\n\\n**A1:** Thank you for your insightful feedback. We appreciate the opportunity to clarify and expand upon this point. In the revised manuscript, we will emphasize the challenges and complexities faced by FuseChat from two key perspectives. First, unlike direct supervised fine-tuning, knowledge fusion introduces additional training costs, such as the computational overhead of inferencing results from the source LLMs. This added step significantly increases resource and time requirements. Second, knowledge fusion methods like FuseChat encounter inherent challenges, including vocabulary alignment across different LLMs and the merging of their distribution matrices. These processes are non-trivial and can introduce noise and errors, which in turn may impact the quality of the fusion results. By addressing these aspects, we aim to provide a more comprehensive discussion of the intricacies involved in developing FuseChat.\\n\\n> **Q2: Regarding the details and formulas for our token alignment method.**\\n\\n**A2:** Token alignment is designed to address the mapping challenges between probabilistic distribution matrices generated by different source LLMs for a given instruction. This alignment occurs along two critical dimensions: sequence and probability distribution.\\nFor sequence alignment, we employ dynamic programming to effectively map the tokenized sequences from the source LLM to those of the pivot LLM. For distribution alignment, we propose to utilize mapping statistics (MS) from the sequence dimension as the criteria for alignment in the distribution dimension. To enhance clarity, we have included a visual illustration of our token alignment method in Figure 7 (Appendix A). We will further refine this explanation by introducing explicit mathematical formulations in the revised revision.\\n\\n> **Q3: Regarding the motivation and details of the SCE algorithm.**\\n\\n**A3:** The motivation of our SCE is to design a simple merging strategy to **automatically identify and incorporate the learned advantages from diverse target LLMs while simultaneously resolving knowledge conflicts in the parameter space, without the need for additional parameter tuning**. \\n\\nTo achieve this, we utilize weight updates from the pivot LLM to various target LLMs during the model fusion process, employing these updates as fusion vectors that reflect diverse advantages from different models. The weight merging for each parameter matrix unit in the target LLMs is carried out through a three-step procedure:\\n\\n1. Fusion vectors for each unit parameter matrix, derived from various target LLMs, are intended to capture distinctive and significant strengths of these models. To emphasize the most impactful features, we select the top \\u03c4% of elements from each parameter matrix-level fusion vector. This selection is based on the elements exhibiting the highest variances across multiple target LLMs, as these variances are indicative of the most significant differences and strengths among the models.\\n2. We then compute a matrix-level merging coefficient for each target LLM based on the sum of squares of elements in their respective selected fusion vectors.\\n3. 
To mitigate knowledge interference across different target LLMs, we implement a conflict resolution mechanism. This entails eliminating elements with minority directions when the signs of weight updates are in opposition.\\n\\nWe acknowledge that the SCE merging method involves technical complexity. Due to space constraints in the initial submission, we were unable to elaborate further on its specifics. In the revised version, we will provide a more detailed and comprehensive explanation to ensure clarity and address this complexity effectively.\"}"
"{\"title\": \"Official Comment by Authors: Part 3\", \"comment\": \"> **Q4: Regarding the statistical significance of performance improvements.**\\n\\n**A4:** We appreciate the reviewer\\u2019s concern regarding the statistical significance of the performance improvements. To address this, we conducted a detailed statistical analysis using t-tests to evaluate the performance of the final FuseChat model against pairwise fusion on MT-Bench. Moreover, we performed a similar analysis comparing the final FuseChat model with OpenChat-3.5-7B Multi, which integrates multiple source LLMs simultaneously, as FuseLLM does. The results summarized in the following table demonstrate **the strong statistical significance of the final FuseChat model's superiority over these baselines**. These results will be included in the revised paper to enhance clarity and provide robust support for our claims.\\n\\n| Model | t-statistic | p-value |\\n|:-------------------------------------:|:-----------:|:-------:|\\n| FuseChat-7B vs. Pairwise Fusion | 2.95874 | 0.00318 |\\n| FuseChat-7B vs. OpenChat-3.5-7B Multi | 3.32756 | 0.00108 |\\n\\n> **Q5: Regarding claims that require further clarification.**\\n\\n**A5:** We sincerely thank the reviewer for highlighting this important point. Regarding the claim about the limitations of FuseLLM, we wish to clarify that while FuseLLM's experiments were constrained to three source LLMs of an equivalent 7B scale, our work broadens the scope by incorporating six source LLMs with varying scales, ranging from 7B to 72B. We will ensure that these claims are conveyed more clearly in the revised manuscript.\"}",
"{\"comment\": \"Dear Reviewer UpRG,\\n\\nThank you for your time and detailed feedback on our manuscript. **As the reviewer-author discussion period ends today (December 2nd at 11:59 pm AoE)**, we would like to check if we have adequately addressed all your concerns.\\n\\nYour insightful comments and questions have been instrumental in improving our work. **We have carefully incorporated your feedback into the revised manuscript and hope that our responses and updates have successfully addressed all the points you raised.**\\n\\nWe understand you have a busy schedule, but if you have any remaining questions or need further clarification, please let us know, and we will address them promptly. **If you feel that we have satisfactorily addressed your concerns, we would greatly appreciate it if you could consider updating your initial score.**\\n\\nThank you again for your valuable time and constructive feedback that has helped enhance the quality of our work.\\n\\nBest regards,\\n\\nThe Authors of Paper 6352\"}",
"{\"comment\": \"Thank you for carefully considering our response to your concerns. We appreciate your feedback and are pleased to hear that our revisions addressed many of the points you raised. We fully respect your judgment on the novelty of our work. However, we hope you can understand that, given the generally favorable ratings from the other two reviewers (8 and 6), your score of 5 may significantly influence the likelihood of our paper's acceptance.\\n\\nSince we seem to have common ground on the motivation behind our work, the proposed two-stage framework, the token alignment strategy, as well as the experimental results, we would kindly ask if you might reconsider raising your score from 5 to 6. This adjustment would greatly enhance the possibility of our work being presented to a broader audience at ICLR.\\n\\nThank you for your time and thoughtful consideration.\"}",
"{\"summary\": \"This paper introduce a fuse-and-merge framework called FUSECHAT, which includes two stages. Pairwise knowledge fusion using a pivot LLM and token alignment to generate target LLMs with identical structure and size, and merging these models via SCE method, which determines merging coefficients based on parameter updates.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"In general, the logic of the article is good, and the abstract, main text, and conclusions are consistent. The experiments are sufficiently convincing. The author summarizes the previous work from multiple aspects in the related work section.\", \"weaknesses\": \"1. In the Introduction section, there is insufficient explanation of the challenges faced by FUSECHAT. It is not enough to just explain the advantages of knowledge fusion, but the complexity of the work should also be highlighted.\\n2. The contribution of the work done in this paper is not explained in the Introduction section. \\n3. The method section uses too many narrative words and lacks specific formula expressions, which increases the difficulty for readers to understand the article. \\n4. In the experiment section, there is a lack of explanation for the adverse results in the experiment.\", \"questions\": \"1. First, the challenges of knowledge fusion tasks and the contributions of this paper should be introduced in the Introduction section.\\n2. The Method section should highlight the work done by the author. Extensive introduction of work that is not their own will make the article appear less innovative, and you can add formulas to further explain Token Alignment. \\n3. The introduction of the SCE algorithm is too short, and the reasons for the use of some steps are not introduced, such as the Calculate and Erase steps.\\n4. Added explanations for poor experimental results in the experimental section, for example, Target LLM performs worse than Pivot LLM and Source LLM in some dimensions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors: Part 2\", \"comment\": \"> **Q4: Regarding the imbalanced performance achieved by FuseChat in different domains.**\\n\\n**A4:** We appreciate the reviewer\\u2019s concern regarding the imbalanced performance in different domains. The performance of FuseChat across different domains is largely determined by two key factors: **the availability of domain-specific training data and the competency of the source LLMs in those domains**. (The domain distribution of our training data is summarized in the table below.) As shown in Figure 3 of the paper, the pairwise fusion process significantly enhances performance in domains like Math and Coding, where the source LLMs are particularly strong and ample domain-relevant training data is available. In contrast, performance is comparatively lower in domains where the source LLMs are less proficient (e.g., Extraction) or where domain-specific training data is sparse (e.g., STEM). This detailed analysis will be included in the revised version.\\n\\n| Statistics | Math | Extraction | Roleplay | Writing | STEM | Reasoning | Humanities | Coding | Total |\\n|:--------------:|:-----:|:----------:|:--------:|:-------:|:----:|:---------:|:----------:|:------:|:-----:|\\n| Num.Sample | 15079 | 20329 | 8137 | 7627 | 983 | 7948 | 1403 | 27119 | 88625 |\\n| Percentage (%) | 17.01 | 22.94 | 9.18 | 8.61 | 1.11 | 8.97 | 1.58 | 30.60 | 100 |\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your positive feedback and for the constructive comments that are pivotal to improve our work.\"}"
]
} |
|
15ASUbzg0N | AVID: Adapting Video Diffusion Models to World Models | [
"Marc Rigter",
"Tarun Gupta",
"Agrin Hilmkil",
"Chao Ma"
] | Large-scale generative models have achieved remarkable success in a number of domains. However, for sequential decision-making problems, such as robotics, action-labelled data is often scarce and therefore scaling-up foundation models for decision-making remains a challenge. A potential solution lies in leveraging widely-available unlabelled videos to train world models that simulate the consequences of actions. If the world model is accurate, it can be used to optimize decision-making in downstream tasks. Image-to-video diffusion models are already capable of generating highly realistic synthetic videos. However, these models are not action-conditioned, and the most powerful models are closed source which means they cannot be finetuned. In this work, we propose to adapt pretrained video diffusion models to action-conditioned world models, without access to the parameters of the pretrained model. Our approach, AVID, trains an adapter on a small domain-specific dataset of action-labelled videos. AVID uses a learnt mask to modify the intermediate outputs of the pretrained model and generate accurate action-conditioned videos. We evaluate AVID on video game and real-world robotics data, and show that it outperforms existing baselines for diffusion model adaptation. Our results demonstrate that if utilized correctly, pretrained video models have the potential to be powerful tools for embodied AI. | [
"world models",
"video diffusion",
"black box adaptation",
"controllable video generation"
] | Reject | https://openreview.net/pdf?id=15ASUbzg0N | https://openreview.net/forum?id=15ASUbzg0N | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"szVzYNHBDg",
"fXqbWZG7Bp",
"dEIETK1jWe",
"XuNZIJlH1c",
"XKrpvPC1gX",
"TxReJiv12O",
"TbDILPwVos",
"TAJxZxuoxW",
"SXT6NsJB3X",
"MGFsc2iMM9",
"JMkkdwv2Gm",
"FVB63xr3Fp",
"BpNKEWGExV",
"AZSM4oomhH",
"8jF8tM8Fvq",
"5uXdKLuUlA"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732568508226,
1732475541098,
1730598472325,
1730242118386,
1734664000334,
1732659599205,
1732475427605,
1730656911483,
1732501385016,
1732475971555,
1732475912827,
1737523836631,
1732475628498,
1730460313777,
1732497105005,
1732646090058
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_NTHp"
],
[
"ICLR.cc/2025/Conference/Submission7403/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_CD2R"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_Q4ad"
],
[
"ICLR.cc/2025/Conference/Submission7403/Area_Chair_hmwt"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_CD2R"
],
[
"ICLR.cc/2025/Conference/Submission7403/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_NTHp"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_Fgus"
],
[
"ICLR.cc/2025/Conference/Submission7403/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7403/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7403/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_Fgus"
],
[
"ICLR.cc/2025/Conference/Submission7403/Reviewer_Q4ad"
],
[
"ICLR.cc/2025/Conference/Submission7403/Area_Chair_hmwt"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for adding a qualitative comparison. Unfortunately I'm not convinced that AVID would show higher performance improvement for downstream applications. If there was evidence for AVID being suitable for adding any other modality of conditioning to video diffusion models, that would have been another reason to accept the paper. And given that there is also no analysis on standard deviation of experiments, I would like to maintain my previous rating.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the valuable time you have spent reviewing our paper.\\n\\n**Question 1**\\n\\n[1] argues for their proposed PoE method because it enables a diffusion model to be adapted without access to the weights of the denoising model. However, per Section 3.2 we show that this does not correctly optimize for the denoising objective.\\n\\nOur approach resolves this issue by making the simple insight that under the same assumptions (access to pretrained model outputs only), given the pretrained model we can train an adapter to directly optimize the denoising objective (via Equation 6), thus overcoming the issue of composing independently trained models in Section 3.2.\\n\\n**Question 2**\\n\\nFor \\u201cNo Mask\\u201d the adapter is not able to output a mask during both training and during inference.\\n\\n**Question 3**\\n\\nUnfortunately, due to changes in institution affiliations, authors no longer have access to compute or model checkpoints so are unable to create this visualization. We agree that this would be an interesting visualization, and regret that we are unable to do this.\\n\\n**Question 4**\\n\\nYes, the inference time is increased to approximately 1.5x the original pretrained model (it is less than 2x because the adapter model is smaller). We have added a short comment on this to the limitations section.\\n\\n**Question 5**\\n\\nYes, there is nothing specific about approach that makes it applicable only to action-conditioning for world modelling. Our approach could in general be used to add a new conditioning signal to a pretrained model. We have added a comment on this for future work in the conclusion: \\u201cWe also wish to explore using AVID adapters to add new conditioning signals to pretrained models other than actions.\\u201d\"}",
"{\"summary\": \"This paper focuses on the problem setting of action-conditioned video generation. It proposes to adapt pre-trained video generation model to action labeled dataset without access to parameters from pre-trained models. The goal is to add action information as conditioning to pre-trained models for more accurate video predictions. Authors also analyze limitations in previous related work, production of expert, under a specific case. The proposed approach, AVID, trains an adapter in smaller size on action labeled video datasets. It takes noise predictions outputs from pretrained models and action information as input, and learns to output a mask that is used to combine outputs from pre-trained model and adapter. The adapter is trained with reconstruction loss between final output from both models and ground truth on domain-specific datasets. Authors conducted experiments on two robotics datasets, Procgen and RT1, and compared proposed approach to several baselines that have full access and do not assume access to pretrained model parameters. Experiments results demonstrate that AVID outperforms baselines in generating more realistic videos and better quality given action information on these domains.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well written and easy to follow\\n2. The main idea of training a lightweight adapter for action-labeled domains is reasonable. It balances finetuning efficiency and task performance.\\n3. Baseline comparisons are comprehensive. Authors compared to many alternative baselines to demonstrate effectiveness of their approach. Authors provide qualitative visualizations for quality of generated videos and usefulness of learned masks.\", \"weaknesses\": \"1. The presentation in Section 3.2 is a little unclear. It is hard to connect analysis about limitations of previous work [1] to motivations of the proposed approach\\n2. The novelty is somewhat limited. The main difference from previous work is to have domain-specific adapter output an element-wise mask that is used to combine noise predictions from pre-trained model and adapter.\\n3. The experimental domains are only two datasets within action-conditioned world modeling\\n\\n[1] Yang, Mengjiao, et al. \\\"Probabilistic adaptation of text-to-video models.\\\" arXiv preprint arXiv:2306.01872 (2023).\", \"questions\": \"1. Regarding W1, can authors elaborate more on how and why this analysis motivates design choices in methodology of AVID?\\n2. In the ablation study of \\u201cNo mask\\u201d (Table 3), is the adapter trained with $\\\\epsilon_{\\\\text{final}}$ given in Equation 5 or the adapter not being able to output a mask?\\n3. Since the mask is an important component in design choices of AVID, could author visualize what the mask looks like in different timesteps of diffusion denoising process, which corresponds to Figure 4d?\\n4. Since AVID performs two diffusion denoising process, does this increase inference time and thus limit the scope of downstream applications of synthetic videos generated from this approach?\\n5. Regarding W3, is it technically possible to apply this approach to domains other than world modeling?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work proposes a method for leveraging diffusion models pretrained on large-scale action-free video data for training action-conditioned diffusion world models on smaller action-labeled data from domains of interest. The motivation for training these world models is to solve downstream sequential decision-making tasks. The proposed method\\u2019s main novelty is that it requires access to some intermediate calculations of the pretrained diffusion model but not to its parameters. The proposed method, AVID, trains an adapter network which is conditioned on the pretrained model\\u2019s noise prediction and optimizes a denoising loss that incorporates both noise predictions from the pretrained model and the adapter\\u2019s output using a learned mask. The author\\u2019s evaluate world model performance on a real-robot and a video game domain based on multiple perceptual metrics as well as an action-prediction-based metric. Baselines include various diffusion-based methods some of which require full access to model parameters. The proposed method either outperforms or is comparable to baselines in most of the evaluated metrics while not requiring access to pretrained model parameters.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Summarized points:\", \"Tackle an interesting problem in the path for scaling robot learning\", \"Clear and well written paper\", \"Good positioning in related work\", \"Comparison to relevant baselines\", \"Thorough analysis of results\", \"Detailed appendix\"], \"weaknesses\": \"Summarized points:\\n- Intermediate model calculations are not necessarily more likely to be accessible than the model parameters\\n- Limitation analysis of previous work (Section 3.2) does not clearly motivate the author\\u2019s specific choice of solution\\n- Main motivation is sequential decision-making but evaluation metrics do not assess the world models\\u2019 efficacy in solving such tasks\\n- It is not clear from the experimental results that training from scratch is not preferable to the proposed method for downstream sequential decision-making\\n\\n**Evaluation - Metrics**\\n\\nThe main motivation of your method is to accommodate sequential decision-making but evaluation metrics do not assess the world models\\u2019 efficacy in policy learning or planning.\\nAll metrics excluding \\u2018Action Error Ratio\\u2019 are perceptual metrics that may be dominated by aspects of the videos that are not important for control. For this reason, I believe the most interesting and relevant metric out of the ones you display in your evaluation is the \\u2018Action Error Ratio\\u2019. Your evaluation could benefit from including additional metrics that are a better proxy for the world model\\u2019s usefulness in sequential decision-making. In the Procgen dataset for example, you may want to measure the ability to predict the reward from the generated frames as well as the actions.\\n\\nI understand that evaluating the world models by actually using them to solve a sequential decision-making task may not be straightforward. Doing this for the RT1 dataset would be hard for multiple reasons, but it may be more feasible for the Procgen environments. 
One possible evaluation pipeline is training a separate model to predict the reward from a given frame and then use the cross-entropy method (CEM) or a similar sampling-based planning algorithm with model predictive control (MPC) on top of the world model to maximize the sum of rewards in the prediction horizon. Any decision-making algorithm you choose doesn\\u2019t have to be SOTA to demonstrate the point that a given world model is better than the other for this purpose.\\n\\nWhat is the accuracy of the action predictor on each dataset? I believe this is important in order to validate the use of the \\u2018Action Error Ratio\\u2019 metric and that this information should at least be in the appendix.\\n\\n**Evaluation - Baselines**\\n\\nWhy do you tune baseline hyperparameters based on FVD and not based on e.g. normalized evaluation metrics? I find this choice puzzling since you explicitly write in the results section that this metric is less suitable than others in the setting of action-conditioned video generation.\\n\\nHow do you choose which baselines out of the 8 you suggested appear in the result tables?\\n\\nCan the authors please explain what is the purpose of the \\u2018Full\\u2019 row in the result tables?\\n\\n**Evaluation - Results**\\n\\nIt is not clear from the experimental results that training from scratch is not preferable to the proposed method for downstream sequential decision-making, a point that is also suggested in the limitations section and is mostly based on the \\u2018Action Error Ratio\\u2019 metric. This is not to say that it clearly is not beneficial. I suggest adding a discussion about the differences in performance in the two domains which would incorporate further insights as to when and why training an adapter is preferable to training from scratch.\\n\\n**Evaluation - Ablation Study**\\n\\n*Mask ablation*: It is not clear from your results that the learned mask has performance benefits and can\\u2019t be \\u2018absorbed\\u2019 into the adapter noise prediction, especially since it hurts performance on one dataset and doesn\\u2019t in the other.\\nHow do you explain the difference in the effects of the mask on performance in each dataset? I think a discussion with respect to factors like the relationship between pre-training and fine-tuning data in each dataset and with respect to the results presented in Figure 4d could shed more light on this matter.\\n\\n*Conditioning ablation*: I think the method and/or ablation section can benefit from an explanation or intuition behind why conditioning on the pretrained model\\u2019s output is beneficial, given that the pretrained output is already accounted for in the objective.\\n\\n*Request for ablation*: As I see it, the fundamental difference between the proposed method and the PoE baseline is that the parameters of the adapter network are trained on the denoising loss containing noise predictions from both the pretrained network and the adapter network. Therefore an interesting ablation would be combining both the NM and NC ablations.\", \"questions\": \"Most questions and suggestions are detailed in the 'Weaknesses' section.\\n\\n**Limitations of Naive Adaptation of Yang et al.**\\n\\nCan the authors please highlight the exact source of discrepancy between the derivation in Yang et al. to the derivation presented in this section? Do you claim that there is an error in their derivation? 
Alternatively, are there different assumptions in your setting where their derivation does not hold?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This is a well-written paper about training an action-conditioning adapter for a pre-trained, frozen video diffusion model.\", \"the_reviewer_reception_of_this_paper_was_mixed\": \"while the writing and presentation are clear and of high quality, there were concerns about the generality of the method and the extent of the experimental evaluation to fully justify the claims made in the paper. For example, it would have been valuable to demonstrate the proposed adapter not just for action conditioning, but for other forms of controlling video diffusion (or even image diffusion) models.\\n\\nThe AC agrees with these concerns and recommends refining the evaluation of the method should the authors wish to submit a revised version of the paper to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"No reviewer was willing to champion the paper for acceptance.\"}",
"{\"comment\": \"I would like to thank authors for answering my questions. This addresses most of my concerns, especially providing more detailed discussions and comparisons to PoE [1]. My remaining suggestions to this work are extending to different downstream tasks other than action conditioning in world models and improving efficiency in inference time due to the use of additional adapter. Changed to weak accept.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the valuable time you have spent reviewing our paper.\\n\\n**Motivation in Introduction**\\n\\nYou are correct that our approach requires access to intermediate model outputs which are not available in current closed source models. We have modified the introduction in lines 70-71 to make it clear that our approach is not directly applicable to current closed source model: \\u201cWe advocate for providers of closed-source video models to provide access to intermediate model outputs in their APIs to facilitate the use of adaptation approaches such as AVID.\\u201d\\n\\n**Performance for Downstream Tasks**\\n\\nThe improvement in framewise-prediction errors demonstrates that AVID results in more accurate video generation, and we would expect that this translates to more accurate decision-making for downstream tasks. We aim to apply AVID to a real decision-making task in future work.\\n\\n**Qualitative Comparison to PoE**\\n\\nWe have added a qualitative comparison to PoE in Figure 7 in appendix A.5. The videos from PoE are much more blurry, and one of them looks like two different videos superimposed. We have added the following comment to the qualitative results section (Line 320):\\n\\n\\u201cThe videos generated by PoE are blurry, and sometimes appear like two superimposed videos.\\u201d\"}",
"{\"summary\": \"The paper proposes a mechanism for adapting current image-conditioned video diffusion models to action-conditioned video diffusion models. They do this by training an additional UNet after the standard video diffusion UNet which predicts an adjustment to the noise output by the standard UNet. Because of this setup, their \\\"adapter\\\" does not need access to the parameters of the pretrained video diffusion model. Experiments show that this kind of noise adaptation helps for some metrics and does not for some others.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper has a nice motivation: how does one adapt existing foundation models in order to add an action conditioning to them, so as to make it more relevant and useful for embodied robotics applications\", \"The paper writing is clear; first the limitations of prior work are built up and then a solution is proposed\"], \"weaknesses\": [\"The way the paper starts with the motivation near L51-52 is a bit misleading. The paper actually cannot fix the issues in L51-52 because they still assume access to the internal inference pipeline of these closed-source model, because if I understand it correctly, this method needs access to a diffusion model's noise prediction at each of the N reverse diffusion steps that happens at inference. For closed source models, this information is not available.\", \"The performance gain in the quantitative metrics is not substantial. The metrics where the proposed method shines are mostly photometric quantities. It is not clear if the error margin between prior work and this work just results from the standard deviation or variance of the models. I think a better reflection of the proposed approach would have come from an application to a downstream robot task (maybe manipulation) that would evaluate a robot in action. PSNR and other photometric errors with the shown gain do not say much about the performance of the method.\", \"The paper heavily leans on the limitations of the POE approach and that is how a learnable adapter is motivated but qualitatively there is no comparison to that approach (even though POE is slightly better than the action conditioned diffusion across some metrics and settings).\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for their reply and for addressing some of my concerns.\\n\\nQuestion 2\\n\\nI do not think that the DiT paper can serve as conclusive evidence that cross-attention based action conditioning in this paper's case would result in poorer performance. The DiT paper conditions on time step and class label not a more fine grained signal as a sequence of actions. My suggestion for trying a cross-attention based conditioning method is inspired by text-to-image/video diffusion methods where a sequence of tokens are used to condition the result in a fine-grained manner using cross-attention.\\nUnfortunately it seems the authors will not be able to test this out given their issues with compute.\\n\\nQuestion 3 \\nIt is unfortunate that the authors are unable to run new experiments during the rebuttal.\\n\\n\\nGiven the circumstances, I will maintain my previous rating of this paper.\"}",
"{\"title\": \"Thank you\", \"comment\": \"We thank the reviewers for their insightful reviews, and have updated our paper accordingly. We will respond to each review separately.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the valuable time you have spent reviewing our paper.\\n\\n**Evaluation - Metrics**\\n\\nThank you for the suggestion to evaluate the world models for sequential decision-making using MPC. We agree that the experiment you proposed makes sense and would be very insightful. However, given the short time frame of the rebuttal period we have had to defer this to future work.\", \"we_have_added_information_about_the_accuracy_of_the_action_predictors_on_each_dataset_to_appendix_b3\": \"\", \"coinrun\": \"\\u201cThe action classifier achieves an accuracy of 0.267 on real videos from a held-out test-set. The accuracy score is low because not all of the 15 actions result in different outcomes in all states in Coinrun. Therefore it is not possible to predict actions at near 100\\\\% accuracy as the action taken is often ambiguous.\\u201d\", \"rt1\": \"\\u201cThe action error predictor achieves an MSE of 0.110 on a held-out test set of normalized actions.\\u201d\\n\\n**Evaluation - Baselines**\\n\\nWe agree that tuning the baseline hyperparameters based on the normalized evaluation metrics is a good idea. We originally used FVD as the tuning metric as it is a highly common video metric. Unfortunately, due to a change in institution the authors have been unable to obtain access to the raw results and so cannot reprocess the results using a different tuning criterion.\\n\\nWe include all of the 8 baselines we suggested in Table 2 for the RT1 results. For the Table 1 Coinrun results, we do not include \\u201cLanguage-Conditioned Finetuning\\u201d as language conditioning is not relevant for that domain. Therefore, there are 7 baselines included in the Table 1 results.\\n\\nThe \\u201cFull\\u201d row indicates models that do not fit within our Small/Medium/Large model groupings because we have not scaled down the trainable parameter counts to reduce model capacity. This includes:\\n\\n- The pretrained model\\n- Full-finetuning methods\\n- ControlNet for RT1 since this is much larger than the small/medium/large models.\\n- Classifier guidance - for the simpler task of classifying actions, the classifier has a large number of trainable parameters and we therefore do not expect it to be limited by model capacity.\\n\\n**Evaluation - Ablation Study**\", \"mask_ablation\": \"We cannot say for sure why the mask helps on one domain but not the other. However, a hypothesis is that the movements in Coinrun are discrete and the images are low resolution. Therefore, it is easy for the adapter model to correct incorrect motions output by the base model with or without the mask. In RT1, the images are higher resolution and the movements are more subtle and continuous. Therefore, the mask may make it easier for the model to learn the more challenging task of correcting the motion from the pretrained model in this domain.\", \"conditioning_ablation\": \"We have added the following discussion to the ablations section. \\u201cOutput conditioning enables AVID to observe errors in the pretrained output and then immediately make corrections. In contrast, NC has slower feedback to correct the pretrained model output as discussed in Zavadski et al. (2023).\\u201d\\n\\nZavadski, Denis, Johann-Friedrich Feiden, and Carsten Rother. \\\"Controlnet-xs: Designing an efficient and effective architecture for controlling text-to-image diffusion models.\\\"\\u00a0*arXiv preprint arXiv:2312.06573*\\u00a0(2023).\", \"combining_nm_and_nc_ablations\": \"Thank you for this suggestion. 
Unfortunately, due to a change in institution the authors no longer have access to compute so we are unable to run this experiment.\\n\\n**Limitations of Yang et al.**\\n\\nSince our submission, we were made aware of another paper which discusses this issue (Du et al. 2023). Du et al. 2023 also point out that the composition is incorrect. However, as t\\u21920 the product model can be approximated by the sum of scores with error\\u21920. Thus, Langevin dynamics with infinite steps and an annealed step size yields the correct distribution. However, the distribution is incorrect for the practical finite-step generation used in diffusion models.\\n\\nWe have added this reference to the discussion in Section 3.2.\\n\\nDu, Yilun, et al. \\\"Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and MCMC.\\\"\\u00a0*International conference on machine learning*. PMLR, 2023.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the valuable time you have spent reviewing our paper.\\n\\n**Question 1**\\n\\nOur results show that adapting a pretrained model results in much more visually accurate video predictions. However, as you point out, the Action Error Ratio is slightly worse on the RT1 domain (although it is better on procgen). This is likely because the adapter has to correct inaccurate motions generated by the pretrained model, which in some domains may be more difficult than learning to generate the correct action-conditioned motion from scratch.\\n\\nIn future work, we wish to explore which approach leads to the strongest performance on downstream decision-making tasks.\\n\\n**Question 2**\\n\\nWe used the scale and shift conditioning as it has been shown to outperform cross-attention conditioning for diffusion transformers [1]. There is no reason that cross-attention couldn\\u2019t be used for the conditioning as an alternative, but we did not explore this.\\n\\n[1] Peebles, William, and Saining Xie. \\\"Scalable diffusion models with transformers.\\\"\\u00a0*Proceedings of the IEEE/CVF International Conference on Computer Vision*. 2023.\\n\\n**Question 3**\\n\\nThank you for this interesting suggestion. Unfortunately, due to changes in institution affiliations, the authors no longer have access to compute and model checkpoints so we are unable to regenerate the results with different task groupings. \\n\\n**Question 4**\\n\\nVideo visualizations are provided in the powerpoint slides provided in the supplementary material. Please let us know if you would like us to provide the videos in an alternative format.\\n\\n**Question 5**\\n\\nWe could not determine how the actions were normalized/preprocessed in the data provided by the IRASim authors. Therefore, we decided to omit the Action Error result, since the IRASim comparison is an auxiliary result that is not directly comparable to AVID.\"}",
"{\"summary\": \"1. The authors propose a novel method to condition pre-trained video diffusion models on action sequences without access to the pre-trained model's parameters.\\n2. The authors demonstrate that their adaptation method is superior to the method proposed in \\\"Probabilistic Adaptation of Text-to-Video Models\\\" and mathematically highlight the limitations of this other approach.\\n3. The authors use different pre-trained base models and two different video domains, games and robotics to quantitatively evaluate their proposed method against the above adaptation approach and some other proposed baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The authors propose a novel method to condition pre-trained video diffusion models on action sequences without access to the pre-trained model's parameters.\\n2. The authors mathematically highlight the limitations of the adaptation method proposed in \\\"Probabilistic Adaptation of Text-to-Video Models\\\" and this other approach.\\n3. The authors demonstrate that their adaptation method has better action consistency compared to the other approach, using a new metric that they introduce. \\n4. The authors also propose multiple baselines to compare against their proposed method.\", \"weaknesses\": \"1. In Table 2, Action conditioned diffusion has a better Action Error Ratio compared to the proposed approach for all three (small, medium, large) variants. While the authors do note this as a limitation, this needs to be explained/investigated more. If it is better to just train an action conditioned diffusion model from scratch why should there be a need to adapt pre-trained models ?\\n\\n2. Instead of using the action embedding to just scale and shift the t-th frame feature, have the authors explored using cross-attention layers directly with the action embedding sequence similar to language conditioning ? Are there any specific challenges that prohibit such an approach ?\\n\\n3. It would be interesting to see results for each task type in RT-1 . Are there tasks that are much harder to model than others and what does that tell us about the approach ?\\n\\n4. Some video visualisations of the generated videos (especially for robotics) would also be very useful to judge the effectiveness of the approach. Are the videos temporally consistent visually ?\\n\\n5. Why is IRAsim's Action error ratio empty in Table 7 ? is it not possible to evaluate the Action Error Ratio of IRAsim ?\", \"questions\": \"see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for their reply and for addressing some of my questions and concerns.\\n\\nIn my opinion, it is a problem that the authors have been unable to obtain access to the results/resources they need to properly address the reviewers\\u2019 requests and concerns during the rebuttal period. It is very common for authors to run additional experiments and evaluation during this period and this limitation should have been resolved long before the actual rebuttal, assuming there are no extreme circumstances that prevented it.\", \"my_remaining_concerns_are\": [\"Downstream decision-making is not properly evaluated.\", \"It remains unclear if the author\\u2019s proposed method is preferable to training from scratch, nor is there a valuable discussion as to when and why this is the case.\", \"Baseline hyperparameter tuning based on FVD raises questions about the relevance of the results.\", \"The fact that there is ambiguity in action prediction given video frames in Coinrun, which results in a very low accuracy for the action classifier, weakens the use of the Action Error Ratio metric on this dataset. Given that AVID only performs better than training from scratch on this metric in Coinrun, this weakens the overall claim that AVID is indeed better than training from scratch.\", \"An ablation with NM and NC could provide more insight to the benefits of these components to your method.\"]}",
"{\"title\": \"[ACTION NEEDED] Respond to author rebuttal\", \"comment\": \"Dear Reviewer,\\n\\nNow that the authors have posted their rebuttal, please take a moment and check whether your concerns were addressed. At your earliest convenience, please post a response and update your review, at a minimum acknowledging that you have read your rebuttal.\\n\\nThank you,\\n--Your AC\"}"
]
} |
14fFV0chUS | TRACE: Temporal Grounding Video LLM via Causal Event Modeling | [
"Yongxin Guo",
"Jingyu Liu",
"Mingda Li",
"Qingbin Liu",
"Xi Chen",
"Xiaoying Tang"
] | Video Temporal Grounding (VTG) is a crucial capability for video understanding models and plays a vital role in downstream tasks such as video browsing and editing.
To effectively handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend in employing video LLMs for VTG tasks. However, current video LLM-based methods rely exclusively on natural language generation, lacking the ability to model the clear structure inherent in videos, which restricts their effectiveness in tackling VTG tasks. To address this issue, this paper first formally introduces a causal event modeling framework, which represents video LLM outputs as sequences of events and predicts the current event using previous events, video inputs, and textual instructions. Each event consists of three components: timestamps, salient scores, and textual captions. We then propose a novel task-interleaved video LLM called TRACE to effectively implement the causal event modeling framework in practice.
TRACE processes visual frames, timestamps, salient scores, and text as distinct tasks, employing various encoders and decoding heads for each. Task tokens are arranged in an interleaved sequence according to the causal event modeling framework's formulation.
Extensive experiments on various VTG tasks and datasets demonstrate the superior performance of TRACE compared to state-of-the-art video LLMs. Our model and code are available at \url{https://github.com/gyxxyg/TRACE}. | [
"video large language model",
"video temporal grounding"
] | Accept (Poster) | https://openreview.net/pdf?id=14fFV0chUS | https://openreview.net/forum?id=14fFV0chUS | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"x4l41VchRc",
"wX74VEdMHP",
"vowM589pqA",
"u52DgWmjMD",
"tAc2sCbTxE",
"pQNQNmYlK6",
"p9pYXDy0KC",
"lywNqqT4rb",
"jhakWpwD4g",
"iemlKcVBRw",
"i26nkbBWpf",
"eKhFr27xBi",
"d1D9NaLtOv",
"YDYoIn18Nb",
"Xn5uU63nyR",
"Sh1itr2J38",
"QDQZirlp3f",
"MFl1oith6Y",
"IepmOIBZmp",
"IDy8sQ9Ujz",
"G2f6FG6cjI",
"9SDVRtRbcE",
"7pa8GnAZ6L",
"7kimlLY14R",
"6HpRM2flAJ",
"4BH4HsKNEj",
"2yZjDtYgR7",
"21yF8Yk0By"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment"
],
"note_created": [
1731839542228,
1732251669756,
1731938796228,
1731840696537,
1732227316457,
1731839886798,
1731839436841,
1732243666534,
1731840157517,
1732012258247,
1731986277644,
1731853207400,
1730027077894,
1732012596382,
1729442837426,
1730650284511,
1731840321198,
1731984966327,
1729058781206,
1731856390587,
1731851185691,
1732322857629,
1737523540509,
1732322763579,
1732590947199,
1734336380229,
1732621531036,
1732619354938
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_MMw2"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_p7QK"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_p7QK"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_MMw2"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_KYwb"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_YJgy"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_YJgy"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_YJgy"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_YJgy"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Area_Chair_QWCP"
],
[
"ICLR.cc/2025/Conference/Submission2914/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2914/Reviewer_KYwb"
]
],
"structured_content_str": [
"{\"title\": \"Reply to Reviewer KYwb (2/2)\", \"comment\": \"> There are several grammatical and spelling errors throughout the manuscript, which impact readability and may detract from the paper\\u2019s clarity. For example: Line 22: \\\"processes\\\" should be corrected to \\\"process\\\". Line 44-45: The phrase \\\"...which,...\\\" should be rephrased, and \\\"lacks\\\" should be changed to \\\"which lack\\\".\\n> \\n\\nThank you for your detailed suggestions! We have corrected the typos in the revised paper, which are highlighted in blue font.\\n\\n> Refine Prompt Design Explanation\\n> \\n\\nThank you for the suggestion! We have provided the examples of QA pairs in the Appendix of our submission.\\n\\n> Explore Custom Scene Parsing Techniques\\n> \\n\\nThank you for the suggestion! In TRACE, since timestamps, scores, and captions are decoded using separate heads, the parsing process is relatively straightforward. This allows us to independently collect timestamps, scores, and text, and then directly use these collected results for evaluation.\\n\\n[1] VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs. Arxiv 2024.\\n\\n[2] Vtimellm: Empower llm to grasp video moments. CVPR 2024.\\n\\n[3] Timechat: A time-sensitive multimodal large language model for long video understanding. CVPR 2024.\\n\\n[4] Lita: Language instructed temporal-localization assistant. ECCV 2024.\\n\\n[5] Momentor: Advancing Video Large Language Model with Fine-Grained Temporal Reasoning. ICML 2024.\\n\\n[6] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding. Arxiv 2024.\\n\\n[7] End-to-End Dense Video Captioning with Parallel Decoding. ICCV 2021.\\n\\n[8] UniVTG: Towards Unified Video-Language Temporal Grounding. ICCV 2023.\"}",
"{\"comment\": \"I appreciate the author\\u2019s response and their efforts to provide additional comparisons and clarifications. While some of the points raised in my original review were partially addressed, there remain core concerns that were not fully resolved. These issues are also echoed by other reviewers (e.g., KYwb).\\n\\nOne key concern is that the main argument regarding causal event modeling is still weakly supported. It remains unclear why the authors chose to focus on causal event modeling as the primary approach for structuring video representations. Videos inherently comprise diverse components\\u2014such as objects, backgrounds, actions, and interactions\\u2014that extend beyond salient scores and timestamps. While I understand the authors\\u2019 intention to draw inspiration from causal language modeling, the analogy appears to lack a solid foundation. Unlike language, which is relatively homogeneous and well-suited to the next-token prediction paradigm, the relationship between language, salient scores, and timestamps is less evident. Additionally, the necessity of a dedicated time/score head is questionable. Why not directly integrate these aspects into the text token space for modeling?\\n\\nThe ablation studies on the slot-based compression are appreciated. However, the additional results mainly show that increasing the number of tokens improves performance. This neither demonstrates the advantages of the proposed approach over established techniques such as Q-Former or 2D Average Pooling nor suggests a number of 8/16 tokens per frame is enough for video modeling. While the results on MVBench and VideoMME are promising, they remain significantly behind the performance of popular models like LLaVA-Onevision or Qwen2-VL.\\n\\nAlthough I will maintain my score currently, I strongly encourage the authors to incorporate the suggested improvements into their revision. I thank the authors for their efforts and look forward to further refinements that could address these fundamental concerns.\"}",
"{\"title\": \"Revision for Detailed Discussion.\", \"comment\": [\"Dear Reviewer YJgy,\", \"Thank you very much for your thoughtful follow-up suggestions! We have revised our paper to include a detailed discussion comparing (1) causal language modeling (e.g., Video LLMs), (2) causal event modeling (TRACE), and (3) causality modeling/discovery models ([1-4, 6-9]). Below are the key takeaways, and we encourage you to refer to Appendix C for the full discussion:\", \"TRACE enhances causal language modeling by:\", \"Providing clear correlations inter- and intra-event triplets.\", \"Independently modeling timestamps, scores, and textual information.\", \"We have also expanded the discussion on related works, including:\", \"Building complete causal relationships to address video understanding problems ([1, 9]).\", \"Introducing benchmark datasets for complex causal reasoning ([2, 4, 6, 7]).\", \"Discovering causality in videos ([3, 8]).\", \"We evaluated TRACE on causality reasoning benchmark [4], where it outperformed open-source video LLMs and achieved performance comparable to GPT-4o on tasks such as event description, contextual reasoning, and episodic reasoning. Although we also attempted to evaluate TRACE on [2], the raw video data was not provided.\", \"In addition, we discussed potential future improvements to TRACE through integration with causality discovery models, including:\", \"Using outputs from causality discovery models as inputs for video LLMs.\", \"Leveraging causality discovery outputs to construct Chain-of-Thought examples.\", \"Applying causality discovery outputs to modify attention masks for visual inputs.\", \"We hope the revised discussions and results address your concerns. Please feel free to reach out if you have any additional questions. Thank you again for your time and valuable feedback!\"]}",
"{\"title\": \"Reply to Reviewer YJgy\", \"comment\": \"Thank you for your valuable suggestions. We have addressed the questions you raised below.\\n> **Autoregressive modeling.**\\n> \\n\\nThank you for the insightful suggestion! **After carefully examining the submission and the papers you listed, we acknowledge that the concept of \\\"event\\\" in our paper may differ from that in [1, 3], which could lead to some confusion.** To address this, we have revised the illustration and Eq. 1 of our paper to provide a clearer understanding of our approach.\\n\\n- **In summary, conditioned on prompts/instructions and all the video frame tokens, TRACE formats its responses using event triplets that consist of timestamps, scores, and text.** This structured approach aligns well with the inherent structure of videos and provides a comprehensive understanding of their content.\\n - In our work, we define an \\\"event\\\" as a triplet comprising timestamps, scores, and text. This triplet serves as a model output unit, providing textual descriptions (answers) along with the corresponding timestamps and matching scores.\\n - For dense video captioning tasks, the event triplets serve as descriptions that summarize the video contents (i.e., \\\"events\\\" in [1,3]).\\n - For moment retrieval tasks, the event triplets provide descriptions of specific moments along with their corresponding timestamps.\\n - For general video QA tasks, the event triplets may encompass the answer, accompanied by the relevant timestamps and matching scores. We believe that constructing these data is an important future work direction for TRACE.\\n - Importantly, each event\\u00a0$e_{k}$\\u00a0is dependent on previous events\\u00a0$e_{1:k-1}$\\u00a0and all the video visual contents\\u00a0$F$ (see Eq. 2). **By observing all the video visual inputs $F$ and previous generated events $e_{1:k-1}$, the model has sufficient information to generate the event\\u00a0$e_{k}$.**\\n - **In [1, 3], the events are considered as the fundamental units of video content, i.e., video clips.** The authors primarily focus on identifying the causality between these video clips. We believe that this approach is relatively orthogonal to ours. Moreover, the outputs generated by causality discovery models have the potential to be integrated as inputs of TRACE, which may further enhance its performance.\\n- **The design of \\\"casual event modeling\\\" makes it easy to leverage pretrained LLMs.** Since current LLMs are typically decoder-only models, \\\"casual event modeling\\\" aligns closely with their pretraining objectives, allowing it to benefit from (1) instruction-following capabilities for handling diverse tasks simultaneously in a zero-shot manner, and (2) performance-boosting techniques like KV-cache and flash-attention. *While \\\"casual event modeling\\\" could be adapted to \\\"masked event modeling\\\" by taking into account future events, doing so would likely disrupt the pretrained knowledge of LLMs and result in a loss of the benefits.*\\n\\n> **Inference speed.**\\n> \\n\\nThank you for the suggestion! We have provided the inference speed of TRACE (128 frames) compared to VTG-LLM (96 frames), which is a typical video LLM with a standard architecture. 
The results demonstrate that TRACE does not incur any additional inference cost.\\n\\n| Youcook2 | Frame Num | Evaluation Time | Throughput (seconds per token) | Memory (M) |\\n| --- | --- | --- | --- | --- |\\n| VTG-LLM | 96 | 2:01:28 | 0.0728 | 19063 |\\n| TRACE | 128 | 1:40:40 | 0.0446 | 24007 |\\n\\n> **LLM backbone.**\\n> \\n\\nThank you for the valuable comments. Due to time and resource constraints, we have not conducted experiments using LLaMA-2. However, we have conducted experiments without causal event modeling, also using Mistral-7B as the LLM backbone.\\n\\nAs shown in the \\\"w/o causal event modeling\\\" section of Table 4, following the previous SOTA [5], we (1) use slot-based compression, (2) add time tokens and score tokens to the text vocabulary, and (3) use natural language outputs instead of event-structured outputs. The training data, vision encoder, and LLMs remain the same as in the original TRACE. **The results show that TRACE significantly outperforms this ablation, even with fewer sampled frames.**\\n\\n[5] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding. Arxiv 2024.\"}",
"{\"comment\": \"Thanks for your efforts in addressing my concerns, and I hope you can add these experiments to the final version to make it more comprehensive for readers.\"}",
"{\"title\": \"Reply to Reviewer p7QK\", \"comment\": \"Thank you for your valuable suggestions. We have addressed the questions you raised below.\\n\\n> While the paper compares TRACE with other video LLMs, it presents limited comparison and may not adequately address how it stands against traditional non-generative and task-specific models.\\n> \\n\\nThank you for the detailed comment. We have provided TRACE\\u2019s results compared to traditional non-generative and task-specific models after fine-tuning in Table 5. The results show that:\\n\\n- *TRACE sets a new SOTA on the YouCook2 dataset without audio inputs,*\\u00a0outperforming existing SOTA by a large margin, and even surpassing Vid2Seq (with audio) on F1 score.\\n- *On the Charades-STA dataset, TRACE performs comparably to non-generative methods*\\u00a0and outperforms strong baselines such as VDI, Moment-DETR, and UnLoc-L.\\n- *TRACE significantly outperforms other video LLMs after fine-tuning,*\\u00a0highlighting the potential of video LLMs for VTG tasks.\\n\\nBeyond fine-tuned results, **TRACE\\u2019s strong zero-shot performance, surpassing existing methods and handling multiple VTG tasks simultaneously, offers significant value**\\u2014something traditional non-generative and task-specific models cannot achieve.\\n\\n> The extent to which TRACE can be applied to other types of video tasks beyond VTG is unclear. Its design may be highly specialized, which could limit its applicability across diverse video understanding tasks. Authors should present more results on other video-understanding tasks since the design seems generalizable by building such causal event relations\\n> \\n\\nThank you for the insightful suggestion! We have also recognized this limitation and conducted additional experiments before the rebuttal. **The results show that the TRACE architecture is still capable of handling general video understanding tasks and excel in VTG tasks:**\\n\\n- *Despite **NOT** being trained on extensive multi-task datasets,*\\u00a0TRACE is still highly effective in handling general video understanding tasks. For example, the TRACE outperform generalist video LLMs like VideoChat2, ShareGPT4Video, and ST-LLM on VideoMME benchmark.\\n- We train TRACE-uni by incorporating additional general video understanding data from a subset of LLaVA-Video-178k (specifically the perceptiontest and YouTube parts). 
**TRACE-uni shows both improved general video understanding and stronger VTG performance without additional VTG training data.** Notably,\\n - TRACE-uni performs on par with, or even outperforms, general video LLMs that use the same LLM backbone and vision encoder (VideoLlama2) using only about 2M training data.\\n - TRACE-uni surpasses TRACE in VTG performance across all three evaluation datasets.\\n \\n\\n| MVBench | Avg | AS | AP | AA | FA | UA | OE | OI | OS | MD | AL | ST | AC | MC | MA | SC | FP | CO | EN | ER | CI |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| VideoLLama2 | 54.6 | | | | | | | | | | | | | | | | | | | | |\\n| TRACE | 48.1 | 61.2 | 56.5 | 72.5 | 46.5 | 61.0 | 48.0 | 69.5 | 40.0 | 22.0 | 31.0 | 86.5 | 37.5 | 37.0 | 51.0 | 45.0 | 40.5 | 39.0 | 31.0 | 43.5 | 44.5 |\\n| TRACE-uni | 53.8 | 68.1 | 58.5 | 72.5 | 41.5 | 73.5 | 55.1 | 71.5 | 40.5 | 25.0 | 53.0 | 88.5 | 63.5 | 38.5 | 51.0 | 52.5 | 49.0 | 59.5 | 33.5 | 49.5 | 32.5 |\\n\\n| VideoMME (w/o subtitle) | Short | Midium | Long | Avg |\\n| --- | --- | --- | --- | --- |\\n| VideoLLama2 | | | | 46.6 |\\n| TRACE | 49.5 | 42.5 | 39.3 | 43.8 |\\n| TRACE-uni | 58.2 | 48.1 | 42.3 | 49.6 |\\n\\n| Youcook2 (Zero-Shot) | CIDER | METEOR | SODA_c | F1 |\\n| --- | --- | --- | --- | --- |\\n| TRACE | 8.1 | 2.8 | 2.2 | 22.4 |\\n| TRACE-uni | 8.6 | 2.9 | 2.3 | 22.4 |\\n\\n| Charades-STA (Zero-Shot) | 0.3 | 0.5 | 0.7 | mIOU |\\n| --- | --- | --- | --- | --- |\\n| TRACE | 58.6 | 40.3 | 19.4 | 38.7 |\\n| TRACE-uni | 63.7 | 43.7 | 21.0 | 41.5 |\\n\\n| QVHighlights (Zero-Shot) | mAP | Hit@1 |\\n| --- | --- | --- |\\n| TRACE | 26.8 | 42.7 |\\n| TRACE-uni | 27.5 | 43.9 |\\n\\n---\"}",
"{\"title\": \"Reply to Reviewer KYwb (1/2)\", \"comment\": \"Thank you for your valuable suggestions! We have addressed the concerns you raised below.\\n\\n> While causal event modeling is presented as a core contribution of this work, the related work section does not address any prior research on similar methodologies. It would be helpful to clarify whether comparable approaches have been explored in the field of video understanding, or if this approach is entirely novel within this domain. Providing this context could strengthen the argument for the method\\u2019s originality and situate it more clearly within existing research.\\n> \\n\\nThank you for pointing that out! To the best of our knowledge, **TRACE is the first video-LLM method to model the response of video LLMs as a series of event triplets (timestamps, scores, captions).** To illustrate:\\n\\n- *Existing video LLMs typically only using text tokens*\\u2014either directly representing times/scores by text [1, 2, 3], or adding new tokens to the text vocabulary [4, 5, 6]. However, these methods still generate output in human language, which limits their ability to effectively capture the underlying structure of the video. Furthermore, introducing new tokens into the vocabulary could potentially degrade the pretrained captioning capabilities of the LLMs [6].\\u00a0*TRACE addresses these challenges by directly modeling video LLM responses as a series of events, using separate encoders and decoders to handle timestamps, scores, and text independently.*\\n- *Some non-LLM methods use event-like structures, but they lack causal modeling and are difficult to adapt to video LLMs.*\\u00a0For example, PDVC [7] uses distinct heads to decode timestamps and text, while UniVTG [8] reformats VTG tasks and employs three separate heads. While these methods share some intuitive similarities with the design of TRACE, they are challenging to integrate with LLMs pretrained using causal language modeling. *In contrast, TRACE introduces causal event modeling, enabling easier utilization of pretrained LLMs' reasoning capabilities and knowledge.*\\n\\nOverall, we believe TRACE is a significant contribution to advancing the field of video LLMs, particularly in addressing the challenges of VTG tasks. Its novel approach could pave the way for future research in effectively integrating video and language models.\\n\\n> It is unclear whether compressing visual features to 8 tokens is sufficient for preserving critical information in complex video scenes. The paper does not provide an analysis or experimental results on the trade-off between the number of tokens and model performance, which would be valuable in understanding the potential impact of this compression choice.\\n> \\n\\nThank you for your valuable suggestion! We would like to provide the following clarification\\n\\n- *We compress the visual tokens to address efficiency and context length limitations.* Since TRACE samples 128 frames, without compression, the ViT would produce over 70K visual tokens. To handle this, we compress the visual tokens to 8 tokens per frame, resulting in a total of 1,792 visual tokens after incorporating the time tokens corresponding to each frame. 
This compression allows us to effectively handle VTG tasks within the 4K context length limit.\\n- *We choose slot-based compression for its lightweight architecture and high performance on VTG tasks.* Introduced by [6], slot-based compression uses only one-third of the parameters of a single cross-attention layer, while outperforming both cross-attention and sampling-based methods on VTG tasks.\\n- As per your recommendation, we have conducted ablation studies on the number of tokens per frame. **However, the training will take more than a week to complete. We will post the results here once the ablation training is finished.**\"}",
"{\"title\": \"Reply to Reviewer p7QK\", \"comment\": \"Dear Reviewer p7QK,\\n\\nThank you for raising the score! We have incorporated the new results in Appendix B.2 of the revised paper and corrected the illustration about training data as suggested by reviewer YJgy. We appreciate your time and efforts in reviewing our paper.\"}",
"{\"title\": \"Reply to Reviewer MMw2 (1/2)\", \"comment\": \"Thank you for your valuable suggestions. We have addressed the questions you raised below.\\n\\n> While the motivation for TRACE is clear, the use of multiple task-specific heads may limit the model\\u2019s generalization. A primary appeal of Video-LLMs lies in their ability to handle a variety of tasks without specific fine-tuning. TRACE\\u2019s focus on VTG may narrow its versatility, making it less effective for general video understanding tasks.\\n> \\n\\nThank you for the insightful suggestion! We have also recognized this limitation and conducted additional experiments before the rebuttal. **The results show that the TRACE architecture is still capable of handling general video understanding tasks and excel in VTG tasks.**\\n\\n- *Despite **NOT** being trained on extensive multi-task datasets,*\\u00a0TRACE is still highly effective in handling general video understanding tasks. For example, the TRACE outperform generalist video LLMs like VideoChat2, ShareGPT4Video, and ST-LLM on VideoMME benchmark.\\n- We train TRACE-uni by incorporating additional general video understanding data from a subset of LLaVA-Video-178k (specifically the perceptiontest and YouTube parts). **TRACE-uni shows both improved general video understanding and stronger VTG performance without additional VTG training data.** Notably,\\n - TRACE-uni performs on par with, or even outperforms, generalist video LLMs that use the same LLM backbone and vision encoder (VideoLlama2) using only about 2M training data.\\n - TRACE-uni surpasses TRACE in VTG performance across all three evaluation datasets.\\n\\n| MVBench | Avg | AS | AP | AA | FA | UA | OE | OI | OS | MD | AL | ST | AC | MC | MA | SC | FP | CO | EN | ER | CI |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| VideoLLama2 | 54.6 | | | | | | | | | | | | | | | | | | | | |\\n| TRACE | 48.1 | 61.2 | 56.5 | 72.5 | 46.5 | 61.0 | 48.0 | 69.5 | 40.0 | 22.0 | 31.0 | 86.5 | 37.5 | 37.0 | 51.0 | 45.0 | 40.5 | 39.0 | 31.0 | 43.5 | 44.5 |\\n| TRACE-uni | 53.8 | 68.1 | 58.5 | 72.5 | 41.5 | 73.5 | 55.1 | 71.5 | 40.5 | 25.0 | 53.0 | 88.5 | 63.5 | 38.5 | 51.0 | 52.5 | 49.0 | 59.5 | 33.5 | 49.5 | 32.5 |\\n\\n| VideoMME (w/o subtitle) | Short | Midium | Long | Avg |\\n| --- | --- | --- | --- | --- |\\n| VideoLLama2 | | | | 46.6 |\\n| TRACE | 49.5 | 42.5 | 39.3 | 43.8 |\\n| TRACE-uni | 58.2 | 48.1 | 42.3 | 49.6 |\\n\\n| Youcook2 (Zero-Shot) | CIDER | METEOR | SODA_c | F1 |\\n| --- | --- | --- | --- | --- |\\n| TRACE | 8.1 | 2.8 | 2.2 | 22.4 |\\n| TRACE-uni | 8.6 | 2.9 | 2.3 | 22.4 |\\n\\n| Charades-STA (Zero-Shot) | 0.3 | 0.5 | 0.7 | mIOU |\\n| --- | --- | --- | --- | --- |\\n| TRACE | 58.6 | 40.3 | 19.4 | 38.7 |\\n| TRACE-uni | 63.7 | 43.7 | 21.0 | 41.5 |\\n\\n| QVHighlights (Zero-Shot) | mAP | Hit@1 |\\n| --- | --- | --- |\\n| TRACE | 26.8 | 42.7 |\\n| TRACE-uni | 27.5 | 43.9 |\\n\\n---\\n\\n> In most cases, lightweight VTG-specific models with stronger performance could be more suitable for VTG scenarios.\\n> \\n\\nThank you for your insightful comments. We would like to clarify a few points and provide further context:\\n\\n- Video LLMs offer distinct advantages that traditional VTG-specific models cannot match. Specifically, they provide:\\n 1. *Zero-shot capability*, which allows the models to perform VTG tasks without the need for task-specific training.\\n 2. 
*The ability to handle multiple VTG tasks simultaneously*, offering a level of versatility that traditional models lack. \\n \\n *We believe these features are crucial for real-world applications, where scalability and adaptability are key.* \\n \\n- While VTG-specific models generally exhibit strong performance on VTG tasks, there is an emerging research trend exploring how video LLMs can address VTG challenges [1, 2, 3, 4, 5]. *TRACE introduces a novel solution that significantly enhances the performance of video LLMs on VTG tasks, and we believe this contribution is valuable to the community.*\"}",
"{\"title\": \"Results on number of slots per frame\", \"comment\": \"Dear Reviewer KYwb,\\n\\nWe have finished the experiments that compressing each frame into 16 tokens. Due to time constraints, we adopted the same settings as in Table 3, using VTG-IT only in Stage 2 and sampling 64 frames. Our findings are as follows: Increasing the number of slots per token significantly enhances TRACE's performance. Therefore, if computational or efficiency constraints are not a concern, we recommend using a larger number of slots per frame.\\n\\n \\n\\n| Youcook2 | Frame Num | Slot Num per Frame | SODA_c | CIDEr | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| | 64 | 8 | 1.9 | 6.9 | 21.4 |\\n| | 64 | 16 | 2.1 | 7.3 | 22.1 |\\n\\n| Charades-STA | Frame Num | Slot Num per Frame | R@1$_{IOU=0.5}$ | R@1$_{IOU=0.7}$ |\\n| --- | --- | --- | --- | --- |\\n| | 64 | 8 | 37.0 | 17.0 |\\n| | 64 | 16 | 41.9 | 20.1 |\\n\\nWe hope our responses have sufficiently addressed your concerns. Please do not hesitate to reach out if you have any further questions. Thank you again for your time and effort in reviewing our paper.\"}",
"{\"title\": \"To Reviewer YJgy\", \"comment\": \"Dear Reviewer YJgy,\\n\\nThank you for raising your score! We will incorporate the discussion section in the final version of the paper. Additionally, we are continuing to refine the main sections to include key takeaways of the discussion. Your detailed review has been invaluable in helping us improve the quality of the paper.\"}",
"{\"title\": \"Further Reply to Reviewer YJgy\", \"comment\": [\"Thank you for your prompt response and insightful suggestions. We fully acknowledge the importance of modeling the complete causal relationship between events for a comprehensive understanding of video content, as you have pointed out. We would like to provide further clarification regarding our study.\", \"Firstly, while **modeling the complete relationship may pose a challenge for decoder-only LLMs, this is a general limitation, not specific to TRACE.** Given that most current video LLMs also rely on decoder-only architectures, we believe that TRACE's structure does not diminish its capacity for general VQA tasks compared to these approaches. We have previously reported this to reviewers p7QK and MMw2 during the rebuttal phase.\", \"Secondly, we agree with your observation that causality discovery is not orthogonal to VTG tasks. We believe that it is possible to design causality discovery models in an 'orthogonal' manner, which can provide a comprehensive understanding of video content. The outputs of these causality discovery models can then be used as supplementary inputs for TRACE, potentially overcoming some of the limitations associated with decoder-only LLMs. We have discussed these related works and future directions in the 'Conclusion and Future Works' section of revised paper.\", \"Lastly, despite TRACE's potential limitations in modeling the complete causal relationship, we firmly believe that it still provides significant benefits. By representing the response of video LLMs as a series of event triplets (timestamps, scores, and captions), TRACE excels in VTG tasks. Furthermore, we envision that TRACE can be extended to other video understanding tasks, such as VQA, by expanding its annotations in the future. Therefore, we believe that TRACE remains a valuable contribution to the field of video understanding.\", \"Thank you once again for your detailed review, which has greatly enhanced the clarity of our paper.\"]}",
"{\"summary\": \"The paper introduces a task-interleaved video LLM, TRACE, which incorporates a newly-designed causal event modeling framework for VTG task. The TRACE employs multiple encoders for different inputs, while the task tokens are arranged in an interleaved manner. TRACE demonstrates SOTA performance on various VTG datasets compared to previous video LLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The presentation and illustration are quite clear and easy to follow.\\n2. The motivation of causal event modeling is quite intuitive and the design is straightforward and yet effective.\\n3. The zero-shot performance is superior compared to previous video LLM methods.\", \"weaknesses\": \"1. While the paper compares TRACE with other video LLMs, it presents limited comparison and may not adequately address how it stands against traditional non-generative and task-specific models.\\n2. The extent to which TRACE can be applied to other types of video tasks beyond VTG is unclear. Its design may be highly specialized, which could limit its applicability across diverse video understanding tasks. Authors should present more results on other video-understanding tasks since the design seems generalizable by building such causal event relations.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Results on number of slots per frame\", \"comment\": \"Dear Reviewer MMw2,\\n\\nWe have finished the experiments that compressing each frame into 16 tokens. Due to time constraints, we adopted the same settings as in Table 3, using VTG-IT only in Stage 2 and sampling 64 frames. Our findings are as follows: Increasing the number of slots per token significantly enhances TRACE's performance. Therefore, if computational or efficiency constraints are not a concern, we recommend using a larger number of slots per frame.\\n\\n| Youcook2 | Frame Num | Slot Num per Frame | SODA_c | CIDEr | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| | 64 | 8 | 1.9 | 6.9 | 21.4 |\\n| | 64 | 16 | 2.1 | 7.3 | 22.1 |\\n\\n| Charades-STA | Frame Num | Slot Num per Frame | R@1$_{IOU=0.5}$ | R@1$_{IOU=0.7}$ |\\n| --- | --- | --- | --- | --- |\\n| | 64 | 8 | 37.0 | 17.0 |\\n| | 64 | 16 | 41.9 | 20.1 |\\n\\nAdditionally, we evaluated TRACE on the causality reasoning benchmark [6], as shown in Table 11 of the revised paper. TRACE outperformed open-source video LLMs and achieved performance comparable to GPT-4o on tasks such as event description, contextual reasoning, and episodic reasoning. This result demonstrates the potential of the TRACE architecture for handling complex reasoning tasks.\\n\\nWe hope our responses have sufficiently addressed your concerns. Please feel free to reach out if you have any further questions. Thank you again for your time and effort in reviewing our paper.\\n\\n[6] Towards Event-oriented Long Video Understanding. ArXiv 2024.\"}",
"{\"summary\": \"This paper addresses the task of Video Temporal Grounding (VTG) and introduces TRACE, a task-interleaved Video-LLM designed for enhanced VTG performance. The authors highlight limitations in current Video-LLMs, which rely solely on natural language generation without considering the inherent temporal structure of videos. To address this, they propose a novel causal event modeling framework, decomposing videos into sequences of events defined by timestamps, salient scores, and textual captions. Extensive experiments on datasets such as Charades-STA, QVHighlights, and YouCook2 demonstrate the superior zero-shot performance of TRACE compared to existing Video-LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) Video Temporal Grounding (VTG) is a crucial task, yet current Video-LLMs underperform in this area. Techniques aimed at improving temporal grounding for these models are highly valuable to advance the field.\\n2) The causal event modeling framework fits well with the next-token prediction paradigm of large language models (LLMs), offering an intuitive way to model video structures in sequential tasks.\\n3) TRACE demonstrates consistent performance improvements over prior Video-LLMs across three key VTG benchmarks (Charades-STA, QVHighlights, and YouCook2), underscoring its effectiveness.\", \"weaknesses\": \"1) While the motivation for TRACE is clear, the use of multiple task-specific heads may limit the model\\u2019s generalization. A primary appeal of Video-LLMs lies in their ability to handle a variety of tasks without specific fine-tuning. TRACE\\u2019s focus on VTG may narrow its versatility, making it less effective for general video understanding tasks. In most cases, lightweight VTG-specific models with stronger performance could be more suitable for VTG scenarios.\\n2) Some clarity is not clear. For example, the paper does not adequately explain slot-based compression, which is not a widely known technique. Moreover, compressing each frame to just 8 visual tokens might lead to significant information loss, raising concerns about the trade-off between efficiency and accuracy.\\n3) It is unclear whether the same set of number tokens is used for both timestamps and scores. If so, this could blend the two types of information, contradicting the authors' claim (lines 45\\u201346) that the model preserves the distinct structure of video events.\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes a new method for Video Temporal Grounding (VTG) tasks, named TRACE. TRACE uses a causal event modeling framework to represent videos as a sequence of events with timestamps, salient scores, and textual descriptions. The paper designs a task-interleaved video large language model to address the limitations of traditional video LLMs in handling the inherent structure of videos. The TRACE model utilizes different encoders and decoding heads to process visual frames, timestamps, and text inputs, enabling more effective event sequencing and causal modeling. Experimental results demonstrate that TRACE achieves state-of-the-art zero-shot performance on various VTG tasks and, after fine-tuning, can match the performance of traditional non-generative, task-specific models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes TRACE, a framework leveraging causal event modeling to generate structured video representations through large language models (LLMs). This approach addresses structural gaps in video data, making it valuable for multi-modal research and practical applications in video analysis.\\n \\n2. TRACE maximizes the potential of pre-trained LLMs by adopting causal event modeling, which decomposes video inputs into frames and aligns them with textual prompts. The temporal segmentation and alignment methods allow videos to be broken down into events with associated timestamps, salient scores, and captions. This granularity is crucial for precise video event parsing and presents a significant step forward in video understanding with LLMs.\\n\\n3.TRACE outperforms the existing Video-LLMs on three pivotal Video Temporal Grounding (VTG) benchmarks\\u2014Charades-STA, QV Highlights, and YouCook2\\u2014underscoring its efficacy and robustness in handling video temporal grounding tasks. This achievement underscores TRACE's ability to accurately capture and model the intricate temporal dynamics across a spectrum of video datasets.\", \"weaknesses\": \"1.\\tWhile causal event modeling is presented as a core contribution of this work, the related work section does not address any prior research on similar methodologies. It would be helpful to clarify whether comparable approaches have been explored in the field of video understanding, or if this approach is entirely novel within this domain. Providing this context could strengthen the argument for the method\\u2019s originality and situate it more clearly within existing research.\\n\\n2.\\tIt is unclear whether compressing visual features to 8 tokens is sufficient for preserving critical information in complex video scenes. The paper does not provide an analysis or experimental results on the trade-off between the number of tokens and model performance, which would be valuable in understanding the potential impact of this compression choice.\\n\\n3.\\tThere are several grammatical and spelling errors throughout the manuscript, which impact readability and may detract from the paper\\u2019s clarity. For example: Line 22: \\\"processes\\\" should be corrected to \\\"process\\\". Line 44-45: The phrase \\\"...which,...\\\" should be rephrased, and \\\"lacks\\\" should be changed to \\\"which lack\\\".\", \"questions\": \"No additional questions. 
Please see the \\\"Weaknesses\\\" section for areas needing clarification.\\n\\n### Recommendations for Improvement:\\n- **Refine Prompt Design Explanation:** Providing specific strategies or insights on prompt design tailored for VTG tasks would enhance the paper's originality and usefulness for future researchers.\\n \\n- **Explore Custom Scene Parsing Techniques:** Introducing refined parsing methods could strengthen TRACE's robustness and accuracy in multi-modal alignment.\\n\\nThis structured feedback should provide the authors with a comprehensive view of the strengths and areas for enhancement in their paper on TRACE.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reply to Reviewer MMw2 (2/2)\", \"comment\": \"> Some clarity is not clear. For example, the paper does not adequately explain slot-based compression, which is not a widely known technique. Moreover, compressing each frame to just 8 visual tokens might lead to significant information loss, raising concerns about the trade-off between efficiency and accuracy.\\n> \\n\\nThank you for your valuable suggestion! We would like to provide the following clarification\\n\\n- *We compress the visual tokens to address efficiency and context length limitations.* Since TRACE samples 128 frames, without compression, the ViT would produce over 70K visual tokens. To handle this, we compress the visual tokens to 8 tokens per frame, resulting in a total of 1,792 visual tokens after incorporating the time tokens corresponding to each frame. This compression allows us to effectively handle VTG tasks within the 4K context length limit.\\n- *We choose slot-based compression for its lightweight architecture and high performance on VTG tasks.* Introduced by [5], slot-based compression uses only one-third of the parameters of a single cross-attention layer, while outperforming both cross-attention and sampling-based methods on VTG tasks.\\n- As per your recommendation, we have conducted ablation studies on the number of tokens per frame. **However, the training will take more than a week to complete. We will post the results here once the ablation training is finished.**\\n\\n> It is unclear whether the same set of number tokens is used for both timestamps and scores. If so, this could blend the two types of information, contradicting the authors' claim (lines 45\\u201346) that the model preserves the distinct structure of video events.\\n> \\n\\nWe apologize for any confusion. **To clarify, timestamps and scores in TRACE use separate sets of tokens.** As explained in lines 235-245, the timestamps and scores are processed by distinct encoder-decoder pairs, though both pairs share the same model structural design.\\n\\n[1] Vtimellm: Empower llm to grasp video moments. CVPR 2024.\\n\\n[2] Timechat: A time-sensitive multimodal large language model for long video understanding. CVPR 2024.\\n\\n[3] Lita: Language instructed temporal-localization assistant. ECCV 2024.\\n\\n[4] Momentor: Advancing Video Large Language Model with Fine-Grained Temporal Reasoning. ICML 2024.\\n\\n[5] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding. Arxiv 2024.\"}",
"{\"title\": \"Further Reply by Reviewer YJgy\", \"comment\": \"Thanks for the further reply.\", \"a_minor_mistake\": \"On lines 1253-1254, I noticed that the training data for TRACE includes Next-QA (a temporal reasoning dataset). Hence, the statement \\\"despite not being trained on large-scale causal reasoning datasets\\\" needs to be modified.\\nHowever, compared to VideoChat2, TRACE uses fewer training data (e.g. CLEVRER), so the experimental results still prove the effectiveness of TRACE's causal modeling method.\\n\\nMy main concerns have been addressed, and I hope the authors ensure that this analysis section will be included in the final version. I believe that TRACE will bring insights to the field of video understanding and provide inspiration for causal modeling in VideoLLMs. \\n\\nIn conclusion, I recommend accepting this paper.\"}",
"{\"summary\": \"This paper proposes a new Video Temporal Grounding (VTG) method that addresses the shortcomings of existing LLM in handling VTG tasks by modeling causal events.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The author first employs a causal modeling method in the grounding of VLLM, achieving causal probability modeling through the sequence of input tokens. This approach will provide inspiration for future work on video understanding tasks using VLLM.\", \"weaknesses\": \"1. **Autoregressive modeling.**\\n\\nOne of my major concerns is that the authors have only used the earlier events $e_{1:k-1}$ in their modeling of causal relationships between events through autoregression, without incorporating the equally known $e_{k+1:K}$. I believe this approach may be unreasonable since it is likely that the same events may occur earlier while the current event is different due to unrelated pretexts. However, this issue can be avoided by modeling different subsequent events simultaneously. Besides, most current video understanding researchers have modeled multiple events by utilizing all contextual events that occur before and after them [1-4]. This may require the authors to provide further explanation.\\n\\n[1] Liang C, et al. Visual abductive reasoning[C]. CVPR 2022.\\n\\n[2] Lam T E, et al. CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes. NeurIPS 2024.\\n\\n[3] Chen T, et al. MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning. NeurIPS 2024.\\n\\n[4] Du Y, et al. Towards Event-oriented Long Video Understanding. ArXiv 2024.\\n\\n2. **Inference speed.**\\n\\nThe authors have adopted a form similar to autoregression, and I would like to understand if there is a time overhead in comparing their model's inference speed to that of current mainstream LLMs.\\n\\n3. **LLM backbone.**\\n\\nI noticed that the authors used Mistral-7B as the LLM backbone, however, in other comparison methods, Timechat used LLaMA-2, while HawkEye, Momentor, and VTimeLLM used Vicuna. I would like to know if the authors have conducted experiments with LLaMA-2 or Vicuna as the LLM backbone, to ensure that the superior performance is not due to the better LLM backbone but rather the causal modeling.\", \"questions\": \"My main concern is with the autoregressive modeling approach, and if the authors can provide a reasonable explanation, I am willing to consider raising my score, as I believe this work could provide inspiration for future work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks to the author for further reply. In my opinion, it is necessary to conduct a comparison and further analysis between VideoLLM-based causal modeling and traditional video understanding model, which will help readers better understand this autoregressive causal modeling and make the causal method proposed in this paper more rigorous in theory. I look forward to the further discussion of video causal reasoning work [1-4, 6-9] etc. by the author in the revised draft. As such, I think this article will provide insights not only for simple VTG task, but also for the field of video understanding, and I am willing to raise my score to 8 if the further analysis is conducted soundly.\"}",
"{\"title\": \"Official Comment by Reviewer YJgy\", \"comment\": \"Thanks for the further claim, part of my concerns has been addressed. However, given previous work on combining causal modeling with video understanding, I still believe that modeling only earlier events represents a suboptimal modeling approach. **Despite recognizing that this limitation may be attributed to the inherent properties of decoder-only LLMs and acknowledging the authors' innovative and insightful endeavors, I remain skeptical of the theoretical rigor.**\\n\\nAccording to [1-4], as well as [6-9], modeling the complete causal relationship between events is essential for comprehensive video content understanding. Moreover, as indicated in [6], the video's Predictive Questions Test is a crucial indicator of causality. Therefore, as Reviewer p7QK, MMw2 points out, I believe that the modeling method of TRACE may not perform well in tasks such as VQA, particularly in predictive questions. This limitation reduces its inspiration to other video understanding tasks, which I initially recognized and appreciated about this work (contribution to video understanding + causal reasoning).\\n\\nMoreover, I believe there is a significant correlation between VTG, VQA, and DVC, rather than orthogonality. I remain firm in my belief that the approach to causal modeling needs to be discussed with current video understanding + causal reasoning works in [1-4] and [6-9], either in the related work or other sections.\\n\\nTherefore, I maintain my current rating.\\n\\n[6] Clevrer: Collision events for video representation and reasoning. ICLR 2020.\\n\\n[7] CATER: A diagnostic dataset for Compositional Actions and Temporal Reasoning. ICLR 2020.\\n\\n[8] Causal discovery in physical systems from videos. NeurIPS 2020.\\n\\n[9] Complex Video Action Reasoning via Learnable Markov Logic Network. CVPR 2022.\"}",
"{\"title\": \"Further Clarification to Reviewer MMw2 (2/2)\", \"comment\": \"> However, the additional results mainly show that increasing the number of tokens improves performance. This neither demonstrates the advantages of the proposed approach over established techniques such as Q-Former or 2D Average Pooling nor suggests a number of 8/16 tokens per frame is enough for video modeling.\\n> \\n\\nWe are sorry for the misunderstanding, and would like to clarify the following points:\\n\\n- **We would like to clarify that the slot-based compression method was not proposed by this paper.** This method was introduced in [5], and we agree with the reviewer that alternative approaches may offer better performance. However, we believe that further verifying the effectiveness of slot-based compression over other methods is responsible for [5] instead of this paper.\\n- We believe it is clear that increasing the number of tokens per frame leads to better model performance. **If computational resources and the context length permit, we recommend avoiding visual token compression.**\\n- **The main contribution of this paper lies in the causal event modeling framework** **and the task-interleaved structure.** We believe this contribution is orthogonal to the design of compression layers.\\n\\n> While the results on MVBench and VideoMME are promising, they remain significantly behind the performance of popular models like LLaVA-Onevision or Qwen2-VL.\\n> \\n\\nThank you for the detailed comments. We believe it is **unfair** to expect TRACE to achieve comparable performance to popular models like LLaVA-Onevision or Qwen2-VL on general video understanding tasks for the following reasons:\\n\\n- **LLaVA-Onevision and Qwen2-VL leverage more advanced LLM backbones (such as Qwen2), larger training datasets (over 5 million samples), and longer context lengths (greater than 8K).** These factors significantly contribute to their stronger benchmark performance. In contrast, our model is trained with a 4K context length, uses the Mistral-7B-v0.2 LLM backbone, and relies on a dataset of just 2 million SFT examples, of which only 1 million are general video understanding samples (TRACE-uni). Given these differences, we believe the comparison is not entirely fair.\\n- **TRACE-uni has achieved performance on par with, or even surpassing, VideoLLama2 (which uses the same LLM backbone and vision encoder)** on general video understanding tasks. Moreover, TRACE-uni was trained **with only about 1M general video understanding task training data**, a significantly smaller dataset compared to VideoLLama2.\\n\\nThank you once again for your thoughtful reviews. We hope the discussions and clarifications provided above address your concerns.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Further Clarification to Reviewer MMw2 (1/2)\", \"comment\": [\"We thank the reviewer for their further response. However, we would like to provide additional clarifications and will incorporate the following discussions and explanations in the revised paper.\", \"> One key concern is that the main argument regarding causal event modeling is still weakly supported. It remains unclear why the authors chose to focus on causal event modeling as the primary approach for structuring video representations. Videos inherently comprise diverse components\\u2014such as objects, backgrounds, actions, and interactions\\u2014that extend beyond salient scores and timestamps. While I understand the authors\\u2019 intention to draw inspiration from causal language modeling, the analogy appears to lack a solid foundation. Unlike language, which is relatively homogeneous and well-suited to the next-token prediction paradigm, the relationship between language, salient scores, and timestamps is less evident.\", \"Thank you for the insightful discussion, and we agree with the reviewer that videos contain many components. However, we would like to clarify the following points, and will incorporate the discussion in the revised paper:\", \"**We believe that the event triplet (timestamp, score, language) naturally arises in video LLM responses**, as every video content, including the objects, actions, and interactions mentioned by the reviewers, inherently has a temporal component (i.e., a happening time). The input query and the corresponding video content will also have associated scores.\", \"**We would like to clarify that the primary focus of this paper is on VTG tasks**, which is why we concentrate mainly on temporal aspects, specifically timestamps and salient scores. For other aspects, such as objects, actions, and interactions, we continue to rely on language modeling. *While future work could explore separate or causal modeling for these components, as well as extending the framework to areas like object detection and interaction recognition, these topics may beyond the scope of our current study.*\", \"We have provided additional discussion on the benefits of our framework in lines 1229-1241 of the revised paper. In summary, the key improvement of causal event modeling over causal language modeling lies in the structure of the model's responses: (1) Explicit cross-triplet correlations; (2) Enabling independent modeling of timestamps, scores, and text.\", \"> The necessity of a dedicated time/score head\", \">\", \"Thank you for your comments. We would like to clarify the following points:\", \"**Using separate time tokens can improve the performance of VTG tasks**, a finding that has been widely verified in previous studies [3, 4, 5, 7]. These studies demonstrate that explicitly modeling temporal information with dedicated tokens enhances the model's ability to handle time-dependent tasks effectively.\", \"**As pointed out by the reviewer, language is relatively homogeneous and well-suited to next-token prediction, whereas timestamps and scores may not align well with the language space.** Previous studies [5] have shown that adding time tokens to the language model can hurt language generation capabilities. 
To address this issue, we model language, timestamps, and scores in separate spaces, and decode them one by one, thereby avoiding this weakness.\", \"**We have conducted ablation experiments**\", \"Using causal language modeling with only the text tokenizer/head.\", \"Using causal event modeling without separate, dedicated time/score heads.\"], \"as_shown_in_table_3_of_our_submission\": [\"Using causal language modeling significantly reduces model performance, resulting in a 2.6-point drop in the CIDEr metric on YouCook2 and a 7.3% performance drop in the R@1$_{IOU=0.5}$ metric for Charades-STA.\", \"When using causal event modeling with a shared head for timestamps, scores, and language, the pretrained knowledge of the LLM is disrupted. This disruption causes the model to fail to follow instructions and prevents it from completing the evaluation task.\", \"[7] Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. CVPR 2023.\"]}",
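As a concrete illustration of the separate-decoding idea described in this response, here is a minimal Python sketch of dedicated time/score/text heads sitting on top of a shared hidden state. This is not the TRACE implementation; the class name, dimensions, and bin counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EventHeads(nn.Module):
    """Illustrative only: decode an event triplet (timestamp, score, text token)
    from a shared hidden state with separate heads, so temporal quantities are
    not forced through the language vocabulary."""
    def __init__(self, d_model=4096, vocab_size=32000, n_time_bins=300, n_score_bins=10):
        super().__init__()
        self.time_head = nn.Linear(d_model, n_time_bins)    # discretized timestamps
        self.score_head = nn.Linear(d_model, n_score_bins)  # discretized salient scores
        self.text_head = nn.Linear(d_model, vocab_size)     # ordinary LM head

    def forward(self, h):
        # h: (batch, d_model) hidden state of the backbone LLM at the current step
        return self.time_head(h), self.score_head(h), self.text_head(h)

heads = EventHeads()
h = torch.randn(2, 4096)
time_logits, score_logits, text_logits = heads(h)
print(time_logits.shape, score_logits.shape, text_logits.shape)
```

The point of the separate heads, as argued above, is that timestamp and score predictions never compete with the text vocabulary, so the LLM's language head is left untouched.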
"{\"title\": \"Additional Evaluation Results\", \"comment\": \"Dear reviewer MMw2,\\n\\nTo further clarify our points, we evaluate Qwen2-VL on the **event-level video understanding benchmark** **E.T.Bench** [8]. Our results show that, *despite being trained with less data, shorter context, and an older LLM backbone*\\n\\n- On the **E.T. Bench**, TRACE\\n - outperforms VideoLlama2 across all tasks;\\n - achieves performance comparable to GPT-4o on RAR, ECA, RVQ, and DVC tasks;\\n - achieves similar performance to Qwen2-VL on RVQ and GVQ tasks;\\n - outperforms both GPT-4o and Qwen2-VL on TVG, EPM, TAL, EVS, SLC, and TEM tasks.\\n- While **Qwen2-VL** demonstrates advanced performance on multiple-choice QA tasks such as the RAR, ECA, and RVQ evaluations from E.T.Bench, outperforming even GPT-4. But its performance on other tasks, ranging from TVG to TEM, remains subpar. This highlights the ongoing challenge of developing a generalist video LLM capable of effectively handling a wide variety of video understanding tasks.\\n\\n| E.T.Bench | RAR,Acc | ECA,Acc | RVQ,Acc | TVG,F1 | EPM,F1 | TAL,F1 | EVS,F1 | VHD,F1 | DVC,F1 | DVC,Sim | SLC,F1 | SLC,Sim | TEM,Rec | GVQ,Acc |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| VideoLLama2 (7B) | 28.8 | 27.4 | 28.0 | 0.1 | 0.0 | 0.0 | 0.0 | 1.5 | 0.6 | 14.5 | 0.0 | 15.2 | 0.0 | - |\\n| Qwen2-VL (7B) | 39.4 | 34.8 | 42.2 | 3.9\\u00a0 | 0.1\\u00a0 | 0.3\\u00a0 | 0.4\\u00a0 | 20.6 | 0.0 | 0.0 | 0.0 | 0.0 | 6.6 | 55.9 |\\n| GPT-4o | 27.8 | 27.3 | 57.7 | 40.4 | 4.5 | 20.0 | 17.6 | 56.9 | 46.9 | 22.3 | 23.1 | 14.9 | 13.6 | - |\\n| TRACE (7B) | 29.4\\u00a0 | 28.8\\u00a0 | 42.6\\u00a0 | 46.8\\u00a0 | 12.3\\u00a0 | 21.6\\u00a0 | 26.6\\u00a0 | 45.2 | 45.7\\u00a0 | 24.0\\u00a0 | 27.3\\u00a0 | 17.7\\u00a0 | 17.8\\u00a0 | 52.4 |\\n\\nWe hope the additional numerical results address your concerns. Please don't hesitate to reach out if you have any further questions. Thank you again for your review.\\n\\n[8] E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding. NeurIPS 2024.\"}",
"{\"metareview\": \"The paper receives 3 positive and 1 negative ratings after rebuttal, with 3 upgraded scores. Initially, the reviewers had several concerns about some technical clarity, motivations of using timestamps and scores, more contexts with the relevant work, more analysis on model parameters, extending to other video tasks, comparisons with multimodal LLMs (e.g., VideoLLama2). In the post-rebuttal discussion period, three reviewers were satisfactory with the authors' comments and raised the rating. After taking a close look at the paper, rebuttal, and discussions, the AC agrees with reviewers' feedback of the proposed method being effective and significant as a LLM foundation for video temporal grounding. Therefore, the AC recommends the acceptance rating.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, most critical concerns from the reviewer KYwb, p7QK, and YJgy, about technical clarity and more experimental results (e.g., comparisons with other video LLMs, other video tasks) are well received by the reviewers. Moreover, for the reviewer MMw2 who still provides the negative rating, the main concerns are on the usage of timestamps/scores and comparisons with Qwen2-VL. The AC took a close look at the rebuttal, discussions, and responses, in which the AC finds that the raised issues are addressed well by the authors in the rebuttal.\"}",
"{\"comment\": \"Dear reviewer KYwb,\\n\\nThank you for raising the score! Your detailed review has been invaluable in helping us improve the quality of the paper.\"}",
"{\"title\": \"Reply to the authors' response\", \"comment\": \"I have no further questions and tend to increase the rating.\"}"
]
} |
14E7S17hFv | Counterintuitive RL: The Hidden Value of Acting Bad | [
"Ezgi Korkmaz"
] | Learning to make sequential decisions solely from interacting with an environment without any supervision has been achieved by the initial installation of deep neural networks as function approximators to represent and learn a value function in high-dimensional MDPs. Reinforcement learning policies face exponentially growing state spaces in experience collection in high dimensional MDPs resulting in a dichotomy between computational complexity and policy success. In our paper we focus on the agent’s interaction with the environment in a high-dimensional MDP during the learning phase and we introduce a theoretically-founded novel method based on experiences obtained through extremum actions. Our analysis and method provides a theoretical basis for effective, accelerated and efficient experience collection, and further comes with zero additional computational cost while leading to significant acceleration of training in deep reinforcement learning. We conduct extensive experiments in the Arcade Learning Environment with high-dimensional state representation MDPs. We demonstrate that our technique improves the human normalized median scores of Arcade Learning Environment by 248% in the low-data regime. | [
"Counterintuitive",
"reinforcement learning"
] | Reject | https://openreview.net/pdf?id=14E7S17hFv | https://openreview.net/forum?id=14E7S17hFv | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wrtmYEdMLN",
"uwgMDg884F",
"ttnL11qHPp",
"lU0y46LXaG",
"kzOHGy4SwR",
"hVgkKxRTBu",
"aWtuLDwgp8",
"ZUVhsVwLCZ",
"U39ftyu3yS",
"SDD75GTRoI",
"HyN1HKpc1t",
"Gi4qO8Wtoz",
"Cy5XQacDuy",
"Cx3XSuPxv3",
"Ci39xSjgqR",
"BAUhLadSgT",
"AwqtqFLqIb",
"9x1xaI9YMx",
"5UiNQN3GqB",
"3VaeQFUEWM",
"0aZ2lHP9Tm"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"meta_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732500957629,
1732533951909,
1732284541503,
1732283903792,
1732285146696,
1732515606087,
1732529360219,
1733161782664,
1732571025697,
1730299589105,
1732620876118,
1732626572915,
1732569877809,
1730494206166,
1737524070898,
1734755579335,
1730712391792,
1730692299117,
1732624362447,
1732283433655,
1732284756110
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_BH4h"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_o2WG"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_qwcj"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_o2WG"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_qwcj"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_EiPG"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_qwcj"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_o2WG"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10690/Area_Chair_A5RP"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_EiPG"
],
[
"ICLR.cc/2025/Conference/Submission10690/Reviewer_BH4h"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10690/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Reply to the authors\", \"comment\": \"Thank you for clarifying concerns 1 and 3 above. I see why in DRL you are not concerned with regret because it is in a game setting and you can normalize the scores. I also appreciate the comparison with CFN.\\n\\nI dont see the fix for $\\\\mathcal D(s)$ in the latest version of paper though. Can you please fix that?\\n\\nAlso, can you please answer why you have $s' \\\\sim \\\\mathcal T(s, \\\\hat{a})$ on line 155-156 in the first expectation? I don't see where the transition is coming into the equation maybe I am missing something.\"}",
"{\"title\": \"Reply to the Reviewer\", \"comment\": \"Thank you very much for your response. On line 155-156 it is indeed not necessary to have $s\\u2019 \\\\sim \\\\mathcal{T}(s,a)$ for the first expectation. Now, we have fixed the typos you mentioned, and thank you again for pointing these typos out.\"}",
"{\"title\": \"Author Response Part I\", \"comment\": \"Thank you for stating that our paper\\u2019s approach to analyze temporal difference and our proposal that is directly useful for structural/temporal credit assignment is interesting while our paper provides experimental analysis ranging from a toy MDP problem to 100K and 200M Atari benchmarks.\\n\\n---\\n\\n**1.** *\\u201dSection 5 suggests that experiments were also done with Double DQN (with PER), however all I could find were learning curves for QRDQN, and Table 1 with results for DDQN. Generally, figuring out what results are based on which basis algorithm was challenging as I had to go back to the text several times without success. Could you please clarify what basis agent/algorithm each plot/table is based \\u201c*\\n\\n---\\n\\nFigure 5 reports the results for Double DQN and Figure 2 reports results for QRDQN. This is also explained in Line 411 and 415. But we can further refer to it in multiple places to give a more smooth reading experience.\\n\\n---\\n\\n**2.** *\\u201dConsider the example above again: The TD error reaches zero/near-zero for the Q-minimizing and Q-maximizing actions after a few updates (where Q is the current estimator and not the true Q-function). However, all other actions will have a much higher TD error since no updates is performed on them. This effectively shows that while the results of the Propositions in the paper could hold probabilistically assuming uniform initialization of the outputs of the Q-estimator network, they do not hold on a case by case setting nor do would they hold after several updates (after the uniformity of the outputs is no longer the case).\\u201d*\\n\\n---\\n\\nFigure 4 demonstrates that the results of Propositions indeed hold not only after several updates but throughout the entire training up to convergence.\\n\\n---\\n\\n**3.** *\\u201dResults on Atari 100K are significant, but not on Atari 200M experiments\\u201d*\\n\\n---\", \"200_million_frame_training_in_tennis\": \"Maxmin TD learning achieves the score of +5.448579367380699, the canonical methods obtain -6.71547619047, this is a 223.25% increase in performance.\", \"200_million_frame_training_in_gravitar\": \"Maxmin TD learning achieves the score of 388.701902, and the canonical methods obtain 295.26349, this is a 31.64% increase in performance.\", \"200_million_frame_training_in_surround\": \"Maxmin TD learning achieves the score of -6.6511238, and the canonical methods obtain -9.442219495, this is a 41.96% increase in performance.\", \"200_million_frame_training_in_jamesbond\": \"Maxmin TD learning achieves the score of 972.579366, and the canonical methods obtains 769.9060246, this is a 26.3% increase in performance.\\n\\nSimilarly the increase in performance achieved by MaxMin TD learning in 200 million frame training persists across many games as reported in Figure 3, and stating that the increase in performance in 200 million frame training is insignificant is not a correct or fair assessment given the results provided in our paper.\\n\\n---\\n\\n**4.** *\\u201dWhat was the reasoning behind choosing the specific subset of games for the Atari 200M experiments.\\u201d*\\n\\n---\\n\\nNote that Figure 3 targets the games that are not part of the ALE 100K benchmark, as well as games which are part of the Arcade Learning Environment 100K benchmark to provide more comprehensive results. In particular, note that Gravitar, Surround, Bowling, StarGunner, and Tennis are not in the ALE 100K benchmark. 
Thus, these results provide more insight into what we can expect for the games that are not part of the ALE 100K benchmark. Furthermore, please note that some of these games in Figure 3 are also considered to be hard exploration games. \\n\\n---\\n\\n**5.** *\\u201cCan you comment on the counterexample that I've mentioned in the \\\"Weaknesses\\\" section? What is your view on it? (Perhaps experimenting with such a setting would be useful.)\\u201d*\\n\\n---\\n\\nAs our extensive empirical analysis demonstrates, the contrived tabular counterexample you give does not seem to be an issue at all in deep reinforcement learning or in the tabular Chain MDP. This is because the learning dynamics itself, as our paper demonstrates, has sufficient randomness in it. Note that the standard convergence analysis of Q-learning requires only that every state-action pair is visited infinitely often. Hence, with sufficient noise in the learning dynamics this condition is also immediately satisfied via MaxMin TD, which indeed is the case as can be seen from the empirical analysis. If one is seriously concerned about contrived counterexample environments one can inject slight noise directly to MaxMin TD and immediately resolve the example.\\n\\n\\n\\n---\\n\\n**6.** *\\u201dWhy are the curves for Atari 200M truncated in some cases?\\u201d*\\n\\n---\\n\\nIn the cases where the policy converges earlier than 200 million frames, the figures were reported in zoomed versions to directly and clearly demonstrate the early convergence achieved.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for stating that our paper is largely well written with a good set of experimental results while improving performance.\\n\\n---\\n\\n**1.** *\\u201dIn the experimental section I would be interested to see comparison with two other papers: \\\"Exploration with random network distillation\\\" and \\\"Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning\\\". Former is based on quantifying how novel a data point is and the latter is directly related to optimistically choosing an action based on pseudo counts. You refer to the inability of doing count based exploration (as done in tabular setting) in your paper but these works are doing some form of counts based exploration. For reference, in lines 126-128 you write.\\u201d*\\n\\n---\\n\\nPlease note that both Random Network Distillation (RND) and Coin Flip Network utilize separate independent neural networks, besides the standard Q-Network to learn and predict independent metrics, i.e. RND utilizes additional recurrent neural networks. This is much different than the goal and objective of our paper which is to achieve sample efficiency across all games with zero additional cost. Our algorithm does not employ any extra networks to predict or separately learn anything.\\n\\nIt is important to emphasize that neither of these prior papers on exploration in deep reinforcement learning claim to lead to faster learning. Rather both are only able to demonstrate an advantage over standard techniques in a few \\u201chard exploration games,\\u201d and further require much larger numbers of environment interactions than what is focused on our paper, e.g. RND requires 1.97 billion frame training.\\n\\nIn particular, RND (Random Network Distillation) is only tested in 6 hard-exploration games Montezuma\\u2019s Revenge, Pitfall, Solaris, Venture, and Private Eye. RND outperforms prior methods in only 3 of these games, and requires 1.97 billion frames of training to do so. Our results for faster learning via MaxMin TD instead require only 100K interactions (i.e. 400K frames) of training.\\n\\nSimilarly, the flipping coins exploration method is tested in only 1 Atari game, Montezuma\\u2019s Revenge, where again the focus is on 200 million frame training. Thus, as these methods involve training additional neural networks to learn and estimate pseudocounts alongside standard RL methods, their advantages tend to only appear in certain hard-exploration settings where large numbers of frames of training are available in order to allow the uncertainty estimation networks to learn accurate estimates.\\n\\nNonetheless, we still tested and added results in comparison to the CFN method from the paper \\u201cFlipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning\\u201d [1] in the supplementary material. The results demonstrate that MaxMin TD learning substantially outperforms both the canonical and recent methods. \\n\\n[1] Sam Lobel, Akhil Bagaria, George Konidaris. Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning, ICML 2023.\\n\\n[2] Yuri Burda, Harrison Edwards, Amos Storkey, Oleg Klimov. Exploration by random network distillation, ICLR 2019. \\n\\n---\\n\\n**2.** *In Proposition 3.4 and 3.6 you start with statements for state $s_t$, which is random variable corresponding to state at time $t$ but in the inequality on the RHS you somehow have $\\\\mathcal{D}(s)$ for a fixed state $s$. 
I am unclear as to where this $s$ is coming from. Moreover, I believe this would significantly complicate the proofs because you will have to account for the time step or \\\"loosen\\\" the lower bound because you will have to take some kind of infimum.*\\n\\n---\\n\\nThank you for pointing this out. This is a typo and it should be $\\\\mathcal{D}(s_t)$ not $\\\\mathcal{D}(s)$.\\n\\n---\\n\\n**3.** *\\u201cWhile I am able to follow intuitively why you would benefit from taking the value minimizing action, I believe you should also include a comment on the estimated regret for this choice. It might be beneficial for a randomly initialized $Q$-function in a game setting but we must consider cases where large negative rewards are \\\"harmful\\\" to the agent. This is one of the reasons why minimizing regret is important to both theoreticians and practitioners.\\u201d*\\n\\n---\\n\\nIn deep reinforcement learning rewards are typically normalized to the interval $[0,1]$ to stabilize training and allow these models to converge. Our method is designed for the deep reinforcement learning setting, rather than the online setting where every action may be associated with a cost, i.e. a large negative reward, and regret is the key metric. Thus, while minimizing regret online is a very important goal, both in theory and practice, it is not the objective of the significant body of work in deep reinforcement learning that our paper is a part of. We can definitely add a comment about regret.\"}",
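To make the distinction drawn in point 1 of this response concrete, the following Python sketch contrasts the two operations: MaxMin TD (as described here) takes a minimum over actions of a single Q-network's output for a given state, whereas Maxmin Q-learning [1] takes an element-wise minimum over an ensemble of Q-functions and then acts greedily. The stand-in functions are illustrative assumptions, not either paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 6

def q_single(state):
    """Stand-in for the output of one Q-network over all actions at a state."""
    return rng.normal(size=n_actions)

def q_ensemble(state, n_estimators=4):
    """Stand-in for Maxmin Q-learning's ensemble of independent Q-estimates."""
    return rng.normal(size=(n_estimators, n_actions))

state = None          # placeholder state; the stand-ins above ignore it
epsilon = 0.1

# MaxMin TD-style behaviour (as described in this response): with probability
# epsilon pick the Q-minimizing action of the single network, else the maximizer.
q = q_single(state)
action_maxmin_td = int(np.argmin(q)) if rng.random() < epsilon else int(np.argmax(q))

# Maxmin Q-learning (Lan et al., 2020): take an element-wise minimum across the
# ensemble of Q-functions first, then act greedily with respect to that minimum.
q_min = q_ensemble(state).min(axis=0)
action_maxmin_q = int(np.argmax(q_min))

print(action_maxmin_td, action_maxmin_q)
```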
"{\"title\": \"Author Response\", \"comment\": \"Thank you for stating that our paper provides both theoretical and empirical analysis while our approach addresses sample efficiency, and further with the empirical analysis conducted in the well-known Atari benchmark, offering tasks with various characteristics.\\n\\n---\\n\\n**1.** *Marginal novelty: The proposed method introduces limited novelty since exploring different selection criteria based on Q-estimations has been previously explored with ensembles [1, 2]. Additionally, similar work addressing the optimistic/pessimistic policy update trade-off exists using a more task-dependent strategy [3].*\\n\\n---\\n\\nWe believe you have substantial confusions regarding understanding the papers you refer to [1,2,3]. But this is not at all something that can be immediately addressed. \\n\\nThe paper [1] employs multiple Q functions and minimizes over the Q functions. This is a completely different concept than our work in which our algorithm has one Q-network and minimization is done over actions for a given state not over multiple Q functions. This paper [2] is about offline reinforcement learning. This is a completely different concept/setup/subfield than our paper. Paper [3] learns a belief distribution over possible Q functions for actor-critic, again this is not relevant to our algorithm or our paper. \\n\\n[1] Qingfeng Lan, Yangchen Pan, Alona Fyshe, Martha White. Maxmin Q-learning: Controlling the Estimation Bias of Q-learning, ICLR 2020. \\n\\n[2] Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi. An Optimistic Perspective on Offline Reinforcement Learning, ICML 2020.\\n\\n[3] Ted Moskovitz, Jack Parker-Holder, Aldo Pacchiano, Michael Arbel, Michael I. Jordan. Tactical Optimism and Pessimism for Deep Reinforcement Learning, NeurIPS 2021. \\n\\n---\\n\\n**2.** *Fair comparison: A more balanced comparison would be to benchmark MaxMin TD Learning against alternative approaches designed to enhance sample efficiency as seen in [4, 5]. The authors could emphasize the benefit of MaxMin TD Learning, such as enabling effective learning without requiring prior logged data or a guide policy, which could potentially lead to distribution shifts.*\\n\\n---\\n\\nThe paper [4] is about model based exploration, our paper is about increasing the temporal difference without any additional networks and models that learn additional metrics. The paper [5] is about using offline reinforcement learning and our paper is about off-policy reinforcement learning. These are completely different concepts and as such almost different subfields in reinforcement learning, thus there is no relevance of these studies to our paper. \\n\\n[4] Yao Yao, Li Xiao, Zhicheng An, Wanpeng Zhang, Dijun Luo. Sample Efficient Reinforcement Learning via Model-Ensemble Exploration and Exploitation, ICRA 2021.\\n\\n[5] Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Jos\\u00e9phine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman. Jump-Start Reinforcement Learning, ICML 2023.\\n\\n---\\n\\n**3.** *Could the authors clarify the decay rate for the hyperparameter $\\\\epsilon$? It seems that Exploration epsilon decay frame fraction decreases every 0.8% of the total interaction budget. How was this value selected?*\\n\\n---\\n\\nHyperparameters are set to the exact same values with prior studies to provide a fair and transparent comparison. 
These hyperparameters are also reported and explained in detail in the supplementary material.\n\n---\n\n**4.** *Could you provide more details on how NoisyNetworks was implemented in the experiments? Clarifying architecture choices and how this was selected would be useful, as it apparently allows for a range of configurations.*\n\n---\n\nThe exact implementation of the prior study [1] is used to provide a consistent and fair comparison.\n\n[1] When to use parametric models in reinforcement learning?, NeurIPS 2019.\n\n---\n\n**5.** *In Figure 1, constant 2 appears to balance exploration and exploitation in the UCB algorithm. Has this constant been optimized for the problem, and if so, could the results for varying values be shown? I'd like to see results for values such as [0.5, 1, 2, 3] as done for the epsilon value.*\n\n---\n\nThe UCB constant was in fact tuned to its best-performing value, which was 0.5 in these experiments. Other values in the interval $[0.5,3]$ performed worse, with larger values leading to slower convergence; these results can be found in the supplementary material.\n\n---\n\n**6.** *Could the authors clarify the intended interpretation of Figure 4 for the reader? What is exactly the meaning of TD error being stable or suffering from a drop during environment interactions? What are \\"high\\" negative or positive values in this case?*\n\n---\n\nIn temporal difference learning, targeting a higher temporal difference (TD) leads to faster learning. Hence, Figure 4 reports temporal difference results, which demonstrate that MaxMin TD learning yields a higher temporal difference throughout training.\"}",
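For readers who want the exploration baselines discussed above pinned down, here is a minimal Python sketch of the two schedules in question: UCB-style action selection with a tunable exploration constant $c$ (0.5 was reported as the best value here), and a linear $\epsilon$ decay over a fixed fraction of the interaction budget. The exact hyperparameter values in the paper follow the cited prior work; the defaults below are illustrative assumptions.

```python
import numpy as np

def ucb_action(q_values, counts, t, c=0.5):
    """UCB-style selection with a tunable exploration constant c; untried
    actions are taken first so the bonus term is always well defined."""
    counts = np.asarray(counts, dtype=float)
    if np.any(counts == 0):
        return int(np.argmin(counts))
    bonus = c * np.sqrt(np.log(t) / counts)
    return int(np.argmax(np.asarray(q_values, dtype=float) + bonus))

def linear_epsilon(step, total_steps, decay_fraction=0.008, eps_start=1.0, eps_end=0.01):
    """Linear epsilon schedule: anneal over the first `decay_fraction` of the
    interaction budget, then hold at eps_end (values here are illustrative)."""
    decay_steps = max(1, int(decay_fraction * total_steps))
    frac = min(1.0, step / decay_steps)
    return eps_start + frac * (eps_end - eps_start)

print(ucb_action([0.2, 0.5, 0.1], [3, 2, 4], t=10, c=0.5))
print(linear_epsilon(step=50_000, total_steps=200_000_000))
```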
"{\"comment\": \"Thanks for your response. Below I will comment on your responses.\\n\\n1. Please do add that Fig. 5 is based on Double DQN in the caption.\\n\\n2. Could you clarify how Fig. 4 shows this?\\n\\n3. \\n- Could you clarify how the 223.25% increase was computed? (\\\"200 million frame training in Tennis: Maxmin TD learning achieves the score of +5.448579367380699, the canonical methods obtain -6.71547619047, this is a 223.25% increase in performance.\\\")\\n\\n- Figure 3: Considering the confidence intervals, the results are not significant. The CIs are overlapping, with minor improvements in the mean performance levels. Also, let's consider for instance the game of Gravitar. Known results for DQN in this game reach a score of ~1300 (see, e.g., https://google.github.io/dopamine/baselines/atari/plots.html). However, the performance shown is ~300 for the baseline. When results are significantly below the known results, the variations in performance could come from slight implementation details or simply just insufficient number of trials. Could you clarify the number of seeds used and the basis agent used in Fig. 3? What basis implementation is being used?\\n\\n- Also, when we discuss performance improvements, it is very important to settle what we mean by performance first. It could be the mean performance at 200M frames, mean performance over the last 5M frames, the Area-Under-the-Curve (AUC), and so on. By truncating the X-axis at arbitrary points and using that performance for such assessments is simply scientifically incorrect.\\n\\n4. Gravitar is the only hard exploration game in the set that I know of (see, e.g., Figure 4 of Oh et al. (2018) \\\"Self-Imitation Learning\\\"). However, I'm not familiar with Surround. But from what I can see in other papers, performance of -7.5 (close to what MaxMin TD Learning is obtaining) is on par with the performance of the random policy. Citing from Badia et al. (2020) \\\"Agent57: Outperforming the human Atari benchmark\\\": \\\"For example, in the game Surround R2D2 achieves the optimal score while NGU performs similar to a random policy\\\" where NGU's performance is provided in the mentioned paper's H.1 Table.\\n\\n5. This unfortunately did not address my question. You mentioned: \\\"If one is seriously concerned about contrived counterexample environments one can inject slight noise directly to MaxMin TD and immediately resolve the example.\\\" However, the question was: could you give me a solid reason not to be worried about having to inject additional noise?\\n\\n6. When we discuss performance improvements, it is very important to settle what we mean by performance first. It could be the mean performance at 200M frames, mean performance over the last 5M frames, the Area-Under-the-Curve (AUC), and so on. By truncating the X-axis at different points and using that performance for such assessments is simply scientifically incorrect. \\n\\n7. My grounding was the same as that you asserted. But could you clarify why an algorithm for action-selection should be categorized as a TD method?\\n\\n8. No comment \\n\\n9. No comment\\n\\n10. What I'm saying is that the X-axis label should be 200M *frames*. But okay.\"}",
"{\"comment\": \"Thank you for your response.\\n\\n*We believe you have substantial confusions regarding understanding the papers you refer to [1,2,3].*\\n\\nI disagree with the assertion. Several components of the algorithm align conceptually with Offline RL methods, particularly in substituting the *argmax* operator with more conservative alternatives. Furthermore, I do not see any inherent limitation in applying these algorithms to online settings. For instance, paper [2] explicitly addresses this in Section 5.3 and provides supporting results in Figure 5. Similarly, [1] presents relevant results in Figure 3. However, I acknowledge that MaxMin TD Learning, as it encompasses both exploration and learning processes, belongs to a distinct category. This distinction, in my view, is the core issue with the paper. As noted by reviewer o2WG, the claim that this approach constitutes a **TD method** is assessed against an **exploration** strategy in a **low-data regime**, but the comparison is not fair under these circumstances.\\n\\nWhile I agree that [4, 5] are not suitable baselines for comparison due to their reliance on prior knowledge, I see no compelling reason to exclude comparisons with [1] and [2]. Specifically, it seems feasible to evaluate their learning processes both with and without the exploration strategy proposed by MaxMin TD Learning. Such a comparison could offer valuable insights into the performance of MaxMin TD Learning's exploration strategy. (1) If it demonstrates performance improvements over a na\\u00efve $\\\\epsilon$-greedy approach, this would provide evidence of its effectiveness. (2) If demonstrate performance improvements employing the MaxMin TD Learning exploration strategy, this would provide evidence of the learning method's effectiveness.\\n\\nLastly, I appreciate the inclusion of UCB results with varying constants in the appendix. However, for clarity, I recommend consolidating these results into a single plot (Appendix - 5.1, Figure 3), only showing UCB results. This would enhance the interpretability.\"}",
"{\"title\": \"Thank You\", \"comment\": \"We greatly appreciate you taking the time to review our paper. We trust that our response has addressed your questions. We wanted to ask if it would be possible for you to reassess your initial review in the light of our clarifications?\\n\\nThank you again.\\n\\nKind regards,\\n\\nAuthors\"}",
"{\"comment\": \"Thanks for your response. Some of my questions where not responded to (e.g. 3.1 and more), but for those responded I think I now have to take another look at the paper and other reviewers' comments to re-evaluate my assessment.\\n\\nThe results in DQN and Double DQN are based on the deterministic Atari suite. I'm assuming your comparison is also on the deterministic version, and not the current best practices of using sticky actions to induce stochasticity in the transition dynamics?\", \"regarding_7\": \"Unfortunately this would require a rewrite of the paper, and not as simple as changing the algorithm name. But I will reassess the paper's exposition in its current status once again.\\n\\nThanks for the AUC results. One important point was that, not only I needed clarification regarding the answers but also the paper needs revisions regarding its result reports. Number of seeds, name of basis algorithm easily attached to results (instead of \\\"the canonical method\\\"), non-truncated graphs, etc. If you can make these changes before the deadline for submitting rebuttal revisions, it would help me (and I'm sure other readers and reviewers) in re-assessing the work and its significance.\"}",
"{\"summary\": \"The authors propose MaxMin TD Learning, an algorithm that alternates between optimistic and pessimistic strategies for action sampling and Q-estimation updates. This approach addresses sample inefficiency in deep reinforcement learning (RL), though the method offers only incremental novelty. The authors provide both theoretical and empirical analysis, evaluating their method on the popular Atari benchmark.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors tested their approach using both DDQN and a more recent approach that represents the possible returns as distribution (QRDQN), which shows the flexibility of applying MaxMin TD Learning over a variety of different algorithms. Moreover, the tested environment is the well-known Atari benchmark, offering tasks with various characteristics. The paper is relatively easy to follow.\", \"weaknesses\": \"* *Marginal novelty*: The proposed method introduces limited novelty since exploring different selection criteria based on Q-estimations has been previously explored with ensembles [1, 2]. Additionally, similar work addressing the optimistic/pessimistic policy update trade-off exists using a more task-dependent strategy [3]. A Related Works section would help clarify where the proposed method advances existing literature. Furthermore, count-based exploration strategies should be referenced in the Background section for completeness.\\n\\n* *Evaluation in high-dimensional MDPs*: The evaluation lacks depth, particularly concerning high-dimensional MDPs. MaxMin TD Learning is designed to enhance sample efficiency via exploration, yet it is compared against a standard $\\\\epsilon$-greedy strategy, which performs well given a larger interaction budget and appropriately tuned decay factors. Limiting the interaction budget significantly impacts $\\\\epsilon$ decay and policy performance, and it appears that the decay factor used here converges too rapidly to its minimum (see Q1). I would recommend including experiments with varying $\\\\epsilon$ values, especially in the 200-million-frame setting. Additionally, while the 100k benchmark used the NoisyNetworks exploration strategy, it was absent in the 200-million-frame experiments.\\n\\n* *Fair comparison*: A more balanced comparison would be to benchmark MaxMin TD Learning against alternative approaches designed to enhance sample efficiency as seen in [4, 5]. The authors could emphasize the benefit of MaxMin TD Learning, such as enabling effective learning without requiring prior logged data or a guide policy, which could potentially lead to distribution shifts.\\n\\n**General remarks**\\n\\n- In the phrase \\u201cThus, in high-dimensional complex MDPs\\u2026\\u201d, the citation of Kakade (2003) seems out of place, as deep reinforcement learning was developed later.\\n\\n- The second question raised saying that the goal is to achieve a *zero cost* experience collection seems infeasible in the context of exploration since interactions with the environment have an inherently associated cost. I think the authors suggest *zero additional cost*\\n\\n- I suggest having a single Reference section.\\n\\n**References**\\n\\n[1] Lan, Qingfeng, et al. \\\"Maxmin q-learning: Controlling the estimation bias of q-learning.\\\" arXiv preprint arXiv:2002.06487 (2020).\\n\\n[2] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. \\\"An optimistic perspective on offline reinforcement learning.\\\" International conference on machine learning. 
PMLR, 2020.\\n\\n[3] Moskovitz, Ted, et al. \\\"Tactical optimism and pessimism for deep reinforcement learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 12849-12863.\\n\\n[4] Yao, Yao, et al. \\\"Sample efficient reinforcement learning via model-ensemble exploration and exploitation.\\\" 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.\\n\\n[5] Uchendu, Ikechukwu, et al. \\\"Jump-start reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2023.\", \"questions\": \"Q1: Could the authors clarify the decay rate for the hyperparameter $\\\\textit{Exploration epsilon decay frame fraction}$? It seems that $\\\\epsilon$ decreases every 0.8% of the total interaction budget. How was this value selected?\", \"q2\": \"Could you provide more details on how NoisyNetworks was implemented in the experiments? Clarifying architecture choices and how this was selected would be useful, as it apparently allows for a range of configurations.\", \"q3\": \"In Figure 1, constant 2 appears to balance exploration and exploitation in the UCB algorithm. Has this constant been optimized for the problem, and if so, could the results for varying values be shown? I'd like to see results for values such as [0.5, 1, 2, 3] as done for the epsilon value.\", \"q4\": \"Could the authors clarify the intended interpretation of Figure 4 for the reader? What is exactly the meaning of TD error being stable or suffering from a drop during environment interactions? What are \\\"high\\\" negative or positive values in this case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reply to the authors\", \"comment\": \"Dear Authors,\\n\\nThanks for your response to my concerns and questions. \\n\\nWhile I am aware of the underlying mechanism of canonical $\\\\epsilon$-greedy, I was more interested in knowing how exactly MaxMin TD is using $\\\\epsilon$. My question was whether the algorithm starts solely looking at the min values ($\\\\epsilon$ = 1.0) and then gradually transitions to solely optimizing the max value ($\\\\epsilon$ = 0) or keeps some min value optimization intact ($\\\\epsilon$ = 0.01). Unfortunately, my question was not directly answered. Also, in the last part of the question, I intend to know more about $e$ which I believe is the rollout.\\n\\nI appreciate that you wanted to put some empirical analysis in the paper, however, as a proponent of an approach that aims to improve performance through better exploration, you should consider adding at least another benchmark that is designed to assess exploration such as MiniGrid [1], Crafter [2]. \\n\\nI would say mentioning DDQN in the caption of Figure 5 will greatly facilitate the reader. Also, the discussion in lines 411-415, seems more tailored towards Figure 2. \\n\\nI would update my review including these suggestions, however, I would like to keep my current score. \\n\\n[1] Chevalier-Boisvert, Maxime, et al. \\\"Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[2] Hafner, Danijar. \\\"Benchmarking the Spectrum of Agent Capabilities.\\\" International Conference on Learning Representations (2022).\"}",
"{\"comment\": \"Thank you for your response. I will keep my score.\"}",
"{\"title\": \"Author Response\", \"comment\": \"---\\n\\n**1.** *\\u201cPlease do add that Fig. 5 is based on Double DQN in the caption.\\u201d*\\n\\n---\\n\\nWe will add the DDQN in the caption of Figure 5.\\n\\n---\\n\\n**2.** *\\u201cCould you clarify how Fig. 4 shows this?\\u201d*\\n\\n---\\n\\nIn Figure 4, the solid lines report temporal difference for MaxMin TD, and the dashed lines report temporal difference for $\\\\epsilon$-greedy, where the different colors correspond to different games. One can see in Figure 4 that the solid lines of each color are consistently above the dashed lines of the same color, indicating that TD is consistently higher for MaxMin TD than for the baseline methods throughout training. This demonstrates that the theoretical analysis provided in our paper, which shows that MaxMin TD learning will increase temporal difference, holds throughout training.\\n\\n---\\n\\n**3.** *\\u201cAtari 200 million\\u201d*\\n\\n---\\n\\nBelow find the table reporting the area under the curve for 200 million frames training. Furthermore, note that MaxMin TD increases performance across all the games. Thus, the results demonstrate that MaxMin TD learning indeed increases the performance also in 200 million frame training. \\n\\n\\n| Games | MaxMin AUC | $\\\\epsilon$-greedy AUC |\\n|------------------------|-------------------------------|----------------------------------|\\n| BankHeist | **164851.1469** | 134149.1993971 | \\n| StarGunner | **8161724.015** | 7753506.86057 |\\n| Surround | **-1448.29515** | -1556.27495522 |\\n| Gravitar | **28227.7661** | 24133.36946 |\\n| Tennis | **-1013.6794** | -1054.38959 |\\n| Amidar | **204763.677** | 195743.7222 |\\n| JamesBond | **104197.2978** | 89518.6469 |\\n| Bowling | **8556.487** | 6980.375 |\\n\\n---\\n\\n**4.** *\\u201dSurround and Gravitar\\u201d*\\n\\n---\\n\\nSurround is part of the Arcade Learning Environment benchmark, and the random policy score in Surround is -10.0 and the human score is 5.4. Thus the human normalized score in Surround for MaxMin TD learning is 21.7459% while human normalized score for baseline method is 3.62195%. Thus, MaxMin TD learning in fact achieves the $7\\\\times$ the human normalized score of baseline methods.\\n\\nThe baseline model is Double DQN and this is also explained in Line 372. The score achieved by Double DQN in Gravitar is 170.50 [1]. As reported by the original papers themselves the score achieved by DQN is 306.67. The results are not at all significantly below the known results. Please check the original papers that originally proposed these algorithms.\\n\\n[1] Hado van Hasselt and Arthur Guez and David Silver. Deep Reinforcement Learning with Double Q-learning, AAAI 2016.\\n\\n[2] Human-level control through deep reinforcement learning, Nature 2015.\\n\\n---\\n\\n**5.** *\\u201dThis unfortunately did not address my question. You mentioned: \\\"If one is seriously concerned about contrived counterexample environments one can inject slight noise directly to MaxMin TD and immediately resolve the example.\\\" However, the question was: could you give me a solid reason not to be worried about having to inject additional noise?\\u201d*\\n\\n---\\n\\nThe results in high dimensional state observation MDPs across the entire benchmark with various algorithms indicate the contrived counter example is not an issue. The empirical analysis throughout the paper demonstrates that MaxMin TD learning not only converges, it obtains substantially higher scores. 
\\n\\n---\\n\\n**6.** *\\u201dWhen we discuss performance improvements, it is very important to settle what we mean by performance first. It could be the mean performance at 200M frames, mean performance over the last 5M frames, the Area-Under-the-Curve (AUC), and so on. By truncating the X-axis at different points and using that performance for such assessments is simply scientifically incorrect.\\u201d*\\n\\n---\\n\\nPlease see response to item 2.\\n\\n---\\n\\n**7.** *\\u201dMy grounding was the same as that you asserted. But could you clarify why an algorithm for action-selection should be categorized as a TD method?\\u201d*\\n\\n---\\n\\nThe name of our algorithm is MaxMin TD learning because our algorithm maximizes the TD in every transition by minimizing the state-action value function, and furthermore we do not impose that our algorithm should be classified as a TD method. If the name of our algorithm causes a confusion for all we would be happy to rephrase the name of our algorithm.\"}",
"{\"summary\": \"This paper argues for an alternative exploration strategy to $\\\\epsilon$-greedy for deep/approximate value-based RL methods (mostly those founded on Q-learning) in which instead of sampling uniformly at random with probability $\\\\epsilon$, their approach samples actions based on $min_a Q(s, a)$. Algorithmically, the TD update rule of the basis algorithm remains intact. The authors argue that experiences generated by this method of acting have meaningful implications during experience replay/consolidation by TD methods (in particular, deep Q-learning family of algorithms). As such, the authors frame their proposal as a TD approach, in what they call MaxMin TD learning.\\n\\nThey examine the learning performance on a few tasks of the Atari suite (200M frames), full set of the Atari 100K benchmark, and an illustrative Chain MDP task. The key Atari results are based on the combination of their MaxMin TD learning with QRDQN (a distributional RL algorithm) in comparison with QRDQN with $\\\\epsilon$-greedy, where MaxMin TD variant achieves higher AUC on both the Median and 80th Percentile aggregate measures.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**Interesting problem scenario:**\\nConsidering exploration strategies that are directly useful for structural/temporal credit assignment is an interesting area to focus on in approximate/deep RL.\\n\\n**Analysis tools around acting uniformly vs. Q-minimizing actions after parameter initialization:**\\nI found the approach of the propositions to analyze the impact of acting uniformly vs. taking the Q-minimizing action on the TD error to be interesting.\\n\\n**Experimental testbeds:** \\nThe choice of testbeds, ranging from a toy MDP problem to 100K and 200M Atari benchmarks is reasonable. \\n\\n**Evaluation metrics:**\\nReporting Median and 80% aggregate measures, using 5 seeds per each method in Atari for DQN-based methods, and reporting standard error of the mean return are all reasonable choices. However, statistical measures introduced by Agrawal et al. (2021) [arXiv:2108.13264] would have been a step up.\", \"weaknesses\": [\"**Framing the approach as a TD method, as opposed to an exploration strategy:**\", \"Framing of the approach as a TD algorithm is not justifiable. A strategic exploration approach could facilitate credit assignment, but categorizing them as a TD approach is rarely ever useful in my view. The proposed method only touches experience generation and not experience consolidation and in this way, I see it as best described as a behavior/exploration strategy. Also, the baselines in question are Noisy Nets and $\\\\epsilon$-greedy, which are both known as exploration/behavior strategies.\", \"**Propositions and proofs do not deliver** a full picture of what's going on, unlike the claims for theoretical foundations on par with those existing in tabular settings.\", \"The approach of only choosing actions from $max Q$ and $min Q$ can easily be shown to introduce bias in a simple counterexample. Say in a multiarmed bandit, action *a* is initialized to the minimal value at random (wrt. to the other initialized actions' values) but as it happens it's *true* Q value is lower than the initialized value. Let's assume also that action *b* is initialized to the maximal value at random (wrt. to the other initialized actions' values) and its corresponding *true* action value is higher than all other initialized actions' values. 
Note that, even if we use functional approximation (e.g. a neural network) to solve this problem, with parameters shared between the Q estimators for the various actions, it can easily end up being the case that no other actions are experienced during the course of training interactions. This would hold even despite the fact that neither of actions *a* and *b* would be the Q-minimizing or the Q-maximizing actions, respectively, wrt. the *true* Q function.\", \"Consider the example above again: The TD error reaches zero/near-zero for the Q-minimizing and Q-maximizing actions after a few updates (where Q is the current estimator and not the true Q-function). However, all other actions will have a much higher TD error since no updates is performed on them. This effectively shows that while the results of the Propositions in the paper could hold probabilistically assuming uniform initialization of the outputs of the Q-estimator network, they do not hold on a case by case setting nor do would they hold after several updates (after the uniformity of the outputs is no longer the case).\", \"Results on Atari 100K are significant, but not on Atari 200M experiments (especially given the fact that the plots for the latter are truncated earlier than 200M frames; e.g. StarGunner is truncated at 70M frames). This likely has ties to my argument above. MaxMin TD could help at the beginning of training (assuming settings like my counterexample occur less commonly in practice / in these tasks), but would not be able to reach higher final performances after enough training of a good baseline on the task.\", \"I think any benefit emerging from MaxMin TD could have ties to epistemic uncertainty minimization. I think discussions, detailed analysis, and comparisons with approaches directly purposed to do so (incl. bootstrapped DQN of Osband et al., 2016) would have been beneficial.\"], \"questions\": \"1. Section 5 suggests that experiments were also done with Double DQN (with PER), however all I could find were learning curves for QRDQN, and Table 1 with results for DDQN. Generally, figuring out what results are based on which basis algorithm was challenging as I had to go back to the text several times without success. Could you please clarify what basis agent/algorithm each plot/table is based on. E.g. Figure 3's caption says \\\"MaxMin TD Learning and canonical temporal difference learning in [ALE]...\\\". The canonical TD learning method actually implies more of the TD($\\\\lambda$) class of methods for prediction than a deep RL method such as DDQN or QRDQN. So please specify.\\n\\n2. Why are the curves for Atari 200M truncated in some cases? (Could be beneficial to add the performance curves for the full length of the experiments.)\\n\\n3. What was the reasoning behind choosing the specific subset of games for the Atari 200M experiments. \\n\\n4. Can you comment on the counterexample that I've mentioned in the \\\"Weaknesses\\\" section? What is your view on it? (Perhaps experimenting with such a setting would be useful.)\", \"minor_suggestions\": [\"Line 87: \\\"MDP [...] contains continuous set of states\\\"; I believe this intro is incorrect and also not applicable to the setting of this paper. In Atari and Chain MDP, states are in fact discrete. 
In Atari, pixel values are discrete, yielding a discrete combinatorial set of states.\", \"Line 89: The definition corresponds to the *expected* reward function.\", \"Line 90: The PMF-based definition of the policy does not hold fully for the continuous-state definition of the MDP. But this will be fine if Line 87 is changed to discrete set of states.\", \"Line 102: I believe the second expectation is mis-specified and in fact is not needed.\", \"Line 108: \\\"In deep reinforcement learning, the state space or the action space is large enough that it is not possible to learn and\", \"store the state-action values in a tabular form.\\\"; state-action spaces being large is not a property of DRL. I think better phrasing would be in this line: domains often tackled with DRL tend to have large state and/or action spaces.\", \"Definition 3.3 seems to be formalized such that $\\\\theta$ is a random variable of the expectation, but the wording seems to imply that $Q_\\\\theta$ is a given.\", \"Would be good to have a visualization of the Chain MDP for ease of readability. Also, what was the number of states $N$?\", \"Number of environment interactions are not equal to the number of frames in Atari 2600 tasks, because of frame skipping of > 1 used. As such, the X axis labels should change to number of frames.\", \"The proposed approach is only compatible with discrete-action Q-based methods. That is to say, methods like DDPG cannot utilize it. I think it would be good to mention this somewhere.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
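The bandit counterexample described in the Weaknesses section above is easy to reproduce in a few lines. The following Python sketch (arbitrary illustrative numbers, tabular value estimates rather than a network) shows that a behavior policy restricted to the current argmin/argmax never visits the intermediate arms, which is the coverage concern the reviewer raises; as the authors note in their responses, adding a small amount of uniform random noise to the action choice would restore coverage.

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.array([-2.0, 0.4, 0.5, 0.6, 2.0])   # arm 0: worst true value, arm 4: best
q_hat  = np.array([-1.0, 0.1, 0.0, 0.2, 1.0])   # arm 0 initialized lowest, arm 4 highest
visits = np.zeros(5, dtype=int)
eps, lr = 0.5, 0.1

for t in range(10_000):
    # Behavior restricted to the current Q-minimizer or Q-maximizer only
    a = int(np.argmin(q_hat)) if rng.random() < eps else int(np.argmax(q_hat))
    r = true_q[a]                      # deterministic rewards for clarity
    q_hat[a] += lr * (r - q_hat[a])    # tabular incremental update
    visits[a] += 1

print(visits)   # only arms 0 and 4 are ever pulled; arms 1-3 are never tried
```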
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": \"This paper introduces a counterintuitive notion that instead of exploring uniformly randomly, exploring with minimum-value action helps enlarge the temporal difference error and hence benefits learning. Justification in terms of theory is given. The approach can be well formulated in terms of max-min optimization. One issue I find is that following the minimum-value action might be quite harmful in some cases.\\n\\nThe learning part of the pseudocode in Algorithm 1 is confusing. I believe the updates are made simply based on the drawn samples. The statement \\u201cTD receives update with probability \\u03f5:\\u201d serves as an explanation of what\\u2019s happening instead of an if-else mechanism that needs to be implemented, an impression of which is created by this block of statement and it may confuse readers. This explanation should be moved out of the pseudocode.\\n\\nThe implication of their results in the asymptotic sense and comparison to a range of baseline exploration strategies and benchmark tasks would make the algorithm much stronger and acceptable in the next submission.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers were mixed about the work. While they appreciated the counterintuitive nature of the result, the work overall seems rather incomplete, some of which I already pointed out above. For example, a reviewer brought up the issue of potential harm in taking minimum-value actions. I found the response by the authors rather confusing: \\u201cOur method is designed for the deep reinforcement learning setting, rather than the online setting where every action may be associated with a cost, i.e. a large negative reward, and regret is the key metric. Thus, while minimizing regret online is a very important goal, both in theory and practice, it is not the objective of the significant body of work in deep reinforcement learning that our paper is a part of.\\u201d Deep RL can be and is used online. There is no agreed-upon notion that deep RL is mainly concerned with being applicable in manipulable games. This concern should be addressed by acknowledging that it can potentially be highly problematic.\"}",
"{\"summary\": \"This work considers the problem of experience collection through behavior policy in RL. They argue the benefit of leveraging extremum actions to learn optimal policy and outlined an algorithm that collects and uses such samples. They theoretically show that how actions with minimum value could be helpful. Experimental validation of the approach has been conducted using the Atari game environment.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. In my view, the main strength of this work lies in the presented theoretical assessment. It shows that the minimum-action value leads to higher temporal difference (TD) than random actions and the difference in the TD is equal to the disadvantage gap. Such finding reveals the underlying importance of the bad actions that may help in accelerate the learning which if often ignored.\\n\\n2. This work nicely formalizes and defines the relevant concepts, and then gradually presents the core propositions. I have found the paper easy to follow. Also, the detailing of the propositions for both single and double Q-learning is helpful for the reader.\", \"weaknesses\": \"1. While the paper presents several experimental results on ALE, it lacks experiments across different benchmarks. It needs rigorous validation to uphold the claim. I would suggest adding more experiments on other benchmarks that are specially designed to assess exploration such as MiniGrid [1], Crafter [2].\\n\\n2. Comparison with more recent and effective exploration techniques is missing. It would be interesting to see comparisons with Random Network Distillation [3] or curiosity-driven approaches [4] (which may require some adaptation).\\n\\n3. Some part of the writing needs improvement. For example, \\n - add more technical clarity such as in lines 291-292, please elaborate on what you intend to mean by \\\"solely due to the experience collection\\\". \\n - minor grammatical issues such as \\\"a fundamental theoretically well-motivated\\\" -> \\\"a fundamental and theoretically well-motivated\\\" OR \\\"a fundamental, theoretically well-motivated\\\". \\n - it is very hard to identify how Figures 2 and 5 differ. It would be helpful to the reader if you would add key information (the underlying architecture in this case) in the caption.\\n\\n[1] Chevalier-Boisvert, Maxime, et al. \\\"Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[2] Hafner, Danijar. \\\"Benchmarking the Spectrum of Agent Capabilities.\\\" International Conference on Learning Representations (2022).\\n\\n[3] Yuri Burda, Harrison Edwards, Amos Storkey, Oleg Klimov. \\\"Exploration by random network distillation\\\", International Conference on Learning Representations (2019).\\n\\n[4] Pathak, Deepak, et al. \\\"Curiosity-driven exploration by self-supervised prediction.\\\" International Conference on Machine Learning (2017).\", \"questions\": \"1. It has been mentioned that \\\"minimizing the state-action value function in early training ...\\\". Does the algorithm considers actions with minimum value \\\"only\\\" in early training and does the value of $\\\\epsilon$ in Algorithm 1 gradually reaches to zero? What is $e$ in algorithm 1?\\n\\n2. 
Is there any motivating factor to cluster the games in figure 4 or is it just because of the value range?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors present a new algorithm which explores by choosing the worst action as estimated by the neural q function. They demonstrate the efficacy of this in low data regimes for double DWN and compare it to vanilla epsilon greedy based double DQN.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper provides a good set of experimental results in the low data regime. The method requires fairly simple changes to existing algorithms and it tends to improve performance while being so. The paper is largely well written and I was able to follow along easily.\", \"weaknesses\": \"I see a few important issues that need addressing before I can raise my score.\\n\\nIn **Proposition 3.4 and 3.6** you start with statements for state $s_t$, which is random variable corresponding to state at time $t$ but in the inequality on the RHS you somehow have $\\\\mathcal D(s)$ for a fixed state $s$. I am unclear as to where this $s$ is coming from. Moreover, I believe this would significantly complicate the proofs because you will have to account for the time step or \\\"loosen\\\" the lower bound because you will have to take some kind of infimum.\\n\\nWhile I am able to follow intuitively why you would benefit from taking the value minimizing action, I believe you should also include a **comment on the estimated regret** for this choice. It might be beneficial for a randomly initialized $Q$-function in a game setting but we must consider cases where large negative rewards are \\\"harmful\\\" to the agent. This is one of the reasons why minimizing regret is important to both theoreticians and practitioners.\", \"in_the_experimental_section_i_would_be_interested_to_see_comparison_with_two_other_papers\": \"\\\"Exploration with random network distillation\\\" and \\\"Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning\\\". Former is based on quantifying how novel a data point is and the latter is directly related to optimistically choosing an action based on pseudo counts. You refer to the inability of doing count based exploration (as done in tabular setting) in your paper but these works are doing some form of counts based exploration. For reference, in lines 126-128 you write\\n>> incorporating these count-based methods in high-dimensional state representation MDPs requires substantial complexity including training additional deep neural networks to estimate counts or other uncertainty metrics\\n\\nI would expect some comparison to how much better these more complex methods are.\", \"questions\": \"I see the following minor issues and make some auggestions:\", \"line_50\": \"I wouldn't call $\\\\epsilon$-greedy \\\"naive and standard technique\\\"\", \"line_96\": \"comma at the end, not a full stop\", \"line_113\": \"I believe you could use different subscript for $\\\\theta$ to differentiate the gradient step from the environment time step.\", \"line_123\": \"full stop at the end of eqn\", \"line_124\": \"\\\"a family of algorithms have been proposed based on counting state visitations\\\" what are these algorithms? I would strongly recommend citing \\\"R-max \\u2013 A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning\\\" and \\\"Reinforcement Learning in Finite MDPs: PAC Analysis\\\" here.\", \"line_155\": \"Why is there an $s' \\\\sim \\\\mathcal T(s, \\\\hat{a})$ in the first expectation? I don't see any dependence on $s'$ in the term inside the bracket. 
Same question for later versions of smoothness too.\\n\\nProof of Proposition 3.4: could you expand on the second inequality please?\", \"line_264\": \"I am unclear on what the \\\"information gained\\\" is here. Is it in an information-theoretic sense or in terms of optimizing the loss?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your response.\\n\\n\\nPlease observe that the paper [2] you reference compares QRDQN to their method (REM) as well as several other ensemble methods including bootstrapped DQN. Figure 5 of [2] plots the learning curves in the online setting for QRDQN, REM and bootstrapped DQN. The results reported in Figure 5 of [2] demonstrate that estimating the value-function distribution as QR-DQN does gives equivalent performance to estimating the value distribution via ensemble methods. \\n\\nThe results reported in Figure 5 of [2] demonstrates that QRDQN performs identically to REM, and in our paper we indeed report results in the top performing algorithm, i.e. QRDQN, and these results reported in Figure 2 of our paper demonstrate that MaxMin TD learning substantially improves performance in the QRDQN case as well.\\n\\n Furthermore, please see the supplementary material for the consolidated UCB results.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for stating that our paper reveals the underlying important components that accelerate learning, and nicely formalizes and defines the relevant concepts while providing a theoretical assessment, and then gradually presents the core propositions, and thank you for preparing a well-thought out review.\\n\\n---\\n\\n**1.** *\\u201cIt has been mentioned that \\\"minimizing the state-action value function in early training ...\\\". Does the algorithm considers actions with minimum value \\\"only\\\" in early training and does the value of $\\\\epsilon$ in Algorithm 1 gradually reaches to zero? What is $\\\\epsilon$ in algorithm 1?\\u201d*\\n\\n---\\n\\nThe values of $\\\\epsilon$ are reported in the supplementary material. The values for $\\\\epsilon$ are set to the exact same values with prior work to provide consistent and transparent comparison. Indeed, $\\\\epsilon$ decay is a standard technique that gradually decreases the value of $\\\\epsilon$.\\n\\n---\\n\\n**2.** *\\u201cIs there any motivating factor to cluster the games in figure 4 or is it just because of the value range?\\u201d*\\n\\n---\\n\\nYes, indeed the clustering of the games in the graphs of Figure 4 is solely due to the value range and space efficiency. \\n\\n---\\n\\n**3.** *\\\"While the paper presents several experimental results on ALE, it lacks experiments across different benchmarks. It needs rigorous validation to uphold the claim.\\\"*\\n\\n---\\n\\nWe wanted to just briefly leave here that our paper provides empirical analysis in the low data regime across the entire Arcade Learning Environment benchmark and in the high data regime of Atari with Double DQN and QRDQN in the major canonical benchmark of deep reinforcement learning, and compares against the canonical methods $\\\\epsilon$ greedy and NoisyNetworks while also providing results for count based methods, i.e. UCB in the Chain MDP. \\n\\n---\\n\\n**4.** *\\\"Some part of the writing needs improvement. For example, it is very hard to identify how figure 2 and 5 differ.\\u201d*\\n\\n---\\n\\nFigure 2 reports results for QRDQN and Figure 5 reports results for DDQN. This is also explained in Line 411 and 415. But we can indeed further refer to them again to provide a better readability.\"}",
"{\"title\": \"Author Response Part II\", \"comment\": \"---\\n\\n**7.** *\\u201dFraming the approach as a TD method, as opposed to an exploration strategy: Framing of the approach as a TD algorithm is not justifiable. A strategic exploration approach could facilitate credit assignment, but categorizing them as a TD approach is rarely ever useful in my view. The proposed method only touches experience generation and not experience consolidation and in this way, I see it as best described as a behavior/exploration strategy. Also, the baselines in question are Noisy Nets and \\u03f5-greedy, which are both known as exploration/behavior strategies.\\u201d*\\n\\n---\\n\\nThe naming of our method is intended to capture the core intuition of our algorithm in which MaxMin TD learning increases the temporal difference for each transition.\\n\\n---\\n\\n**8.** *\\u201dPropositions and proofs do not deliver a full picture of what's going on, unlike the claims for theoretical foundations on par with those existing in tabular settings.\\u201d*\\n\\n---\\n\\nThe propositions and proofs in our paper provide a theoretical analysis and justification for our proposed method MaxMin TD learning in which MaxMin TD learning selects transitions with higher temporal difference. Our empirical analysis in the the entire ALE 100K benchmark across different algorithms confirms this theoretical analysis and these mathematical predictions that indeed MaxMin TD learning does have higher temporal difference throughout the entire training, and that MaxMin TD learning indeed learns much faster. We do not anywhere in the paper mention that we provide theoretical foundations for the tabular setting, rather we mention that due to the concerns and the strong assumptions of the tabular settings, the tabular methods do not scale to deep reinforcement learning.\\n\\n---\\n\\n**9.** *\\u201cI think any benefit emerging from MaxMin TD could have ties to epistemic uncertainty minimization. I think discussions, detailed analysis, and comparisons with approaches directly purposed to do so (incl. bootstrapped DQN of Osband et al., 2016) would have been beneficial.\\u201d*\\n\\n---\\n\\nWe do not see any clear connection to epistemic uncertainty minimization as in bootstrapped DQN. \\nBootstrapped DQN and related methods seek to maintain a distribution over value function estimates to explicitly learn and represent epistemic uncertainty about the true values. \\nOur method does not employ additional deep neural networks to measure any sort of uncertainty, our method is based on increasing the temporal difference.\\n\\n\\n\\n---\\n\\n**10.** *\\u201dNumber of environment interactions are not equal to the number of frames in Atari 2600 tasks, because of frame skipping of > 1 used. As such, the X axis labels should change to number of frames.\\u201d*\\n\\n---\\n\\nThe x-axis reported is indeed correct. Indeed the number of environment interactions are not equal to the number of frames due to standard frame stacking. Please see the discussion in number of frames and number of environment interactions in Atari 2600 tasks in [1]. In particular please see Page 8 first paragraph, and furthermore please see Figure 3 in Page 9.\\n\\n[1] Hado van Hasselt, Matteo Hessel, John Aslanides. When to use parametric models in reinforcement learning?, NeurIPS 2019.\"}"
]
} |
13PclvlVBa | EEGMamba: Bidirectional State Space Model with Mixture of Experts for EEG Multi-task Classification | [
"Yiyu Gui",
"Mingzhi Chen",
"Guibo Luo",
"Yuchao Yang"
] | In recent years, with the development of deep learning, electroencephalogram (EEG) classification networks have achieved certain progress. Transformer-based models can perform well in capturing long-term dependencies in EEG signals. However, their quadratic computational complexity poses a substantial computational challenge. Moreover, most EEG classification models are only suitable for single tasks and struggle with generalization across different tasks, particularly when faced with variations in signal length and channel count. In this paper, we introduce EEGMamba, the first universal EEG classification network to truly implement multi-task learning for EEG applications. EEGMamba seamlessly integrates the Spatio-Temporal-Adaptive (ST-Adaptive) module, bidirectional Mamba, and Mixture of Experts (MoE) into a unified framework. The proposed ST-Adaptive module performs unified feature extraction on EEG signals of different lengths and channel counts through spatial-adaptive convolution and incorporates a class token to achieve temporal-adaptability. Moreover, we design a bidirectional Mamba particularly suitable for EEG signals for further feature extraction, balancing high accuracy, fast inference speed, and efficient memory-usage in processing long EEG signals. To enhance the processing of EEG data across multiple tasks, we introduce task-aware MoE with a universal expert, effectively capturing both differences and commonalities among EEG data from different tasks. We evaluate our model on eight publicly available EEG datasets, and the experimental results demonstrate its superior performance in four types of tasks: seizure detection, emotion recognition, sleep stage classification, and motor imagery. The code is set to be released soon. | [
"EEG Classification",
"State Space Models",
"Mixture of Experts",
"Brain-Computer Interfaces"
] | Reject | https://openreview.net/pdf?id=13PclvlVBa | https://openreview.net/forum?id=13PclvlVBa | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vD1Q5C3cqL",
"tEAzBXWPjt",
"rBXfPYJD0t",
"nZqW39dJmu",
"n21Hb8Uy94",
"mlKYa12Rig",
"lWhtjgeLU8",
"e5b0Twjq6t",
"cSvtdb8WKE",
"a22XzeBoNe",
"XNiZGyTGYA",
"T7JZDSmgkf",
"S37ueq1fJp",
"OfXKKdFfbC",
"O8seF0IL8q",
"MSViJD2qTK",
"KIXS80nfDo",
"INgNpvM3UL",
"Fg1yYAfvCq",
"F8YfTXItzt",
"DnZ4ZORXGR",
"BjxQsdMUeo",
"AMlh1KgKpv",
"9g2IYMuut5",
"9e158ulgQk",
"4MQ1Ah8I6S",
"3xVXOSsrCR",
"3JX50LbVSn"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1732364532161,
1732354331220,
1732352690392,
1732353866936,
1732352920780,
1732350965180,
1733132177168,
1734376189134,
1729515470698,
1732354046094,
1732352302105,
1733130222563,
1732350539780,
1729863730434,
1730716116266,
1732353102051,
1730616667008,
1733067792279,
1733133301708,
1732353398688,
1732350418459,
1732358331267,
1732352477753,
1737524053693,
1733042009397,
1732523705198,
1732529758215,
1730432077226
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_xgg9"
],
[
"ICLR.cc/2025/Conference/Submission10441/Area_Chair_vczE"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_SRQL"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_uVYA"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_i9VP"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_xgg9"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_wapT"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_wapT"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_uVYA"
],
[
"ICLR.cc/2025/Conference/Submission10441/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10441/Reviewer_wapT"
]
],
"structured_content_str": [
"{\"title\": \"Thanks\", \"comment\": \"Thank you for your response. We still have a few points that need clarification:\\n1. We have added results comparing with LaBraM in our previous reply and included this baseline in the modified PDF. As shown, our EEGMamba outperforms LaBraM across the board. Our current baseline includes both BIOT and LaBraM models, which is consistent with [1] and sufficient to demonstrate the performance of the model. \\n\\n2. In fact, before the foundation models, most models were tested on only one or a few tasks. Therefore, when selecting baselines, we tend to choose models with superior performance, and experiments have proven that AttnSleep's performance is comparable to that of the newly proposed models. Additionally, we observed that [2][3] used ST-Transformer as a baseline for comparing tasks like epilepsy detection, even though the original ST-Transformer paper only tested it on motor imagery tasks. This indirectly supports the reasonableness of our approach. \\n\\n3. Justification for Choosing EEGNet as a Baseline: When selecting baselines for comparison, we consider not only their novelty but also their performance and significance. EEGNet is included in our paper for two key reasons: \\n- It is a CNN-based model, whereas most new models incorporate Transformer, and we mentioned in the paper that the Mamba architecture, when applied to EEG classification, demonstrates the advantages of CNNs over Transformers. Therefore, it is necessary to include a CNN-based model as a baseline. \\n- Although EEGNet was published earlier, its performance is indisputable, and many later models still struggle to outperform it comprehensively. For example, [4] shows that EEGNet outperforms LaBraM across 12 datasets from five different tasks. \\n\\n4. As indicated in our paper's title, ***EEG**Mamba: Bidirectional State Space Model with Mixture of Experts for EEG Multi-task Classification*, our focus is on applying the proposed model to EEG. Most works tend to distinguish these two modalities [1][2][3][4]. For example, [3] compared only with BIOT and did not compare with the earlier BrainBERT model. If you're interested in exploring Mamba's application in SEEG or other physiological signals, we can discuss it in future work. \\n\\n5. Using training time as an evaluation metric seems rather uncommon, since it could be subject to too many uncertainties. In contrast, inference time is directly related to inference speed, and we have presented quantitative results on inference speed in Figure 4 of our manuscript.\\n\\n[1] Jiang, W. B., Wang, Y., Lu, B. L., & Li, D. (2024). NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals. arXiv preprint arXiv:2409.00101. \\n[2] Yang C, Westover M, Sun J. Biot: Biosignal transformer for cross-data learning in the wild[J]. Advances in Neural Information Processing Systems, 2024, 36. \\n[3] Jiang, W. B., Zhao, L. M., & Lu, B. L. (2024). Large brain model for learning generic representations with tremendous EEG data in BCI. arXiv preprint arXiv:2405.18765. \\n[4] Yue T, Xue S, Gao X, et al. EEGPT: Unleashing the Potential of EEG Generalist Foundation Model by Autoregressive Pre-training[J]. arXiv preprint arXiv:2410.19779, 2024.\"}",
"{\"title\": \"Reply to Q1-Q3\", \"comment\": \"**Q1: Could you please specify how the EEG tasks are encoded into task tokens?**\\n**A1:** We sincerely apologize for any confusion caused by the unclear presentation. The task token is pre-assigned to each task before model training begins. When the EEG feature tokens pass through the task-aware gate, both the task token and the EEG feature tokens are both involved in the computation of the task-aware gate. \\n\\n**Q2: I noticed that the DEAP dataset was utilized in this study, but only data from four electrodes were selected\\u2026 What is the reasoning for selecting such unusually long data lengths?** \\n**A2:** As described in Appendix D5, we selected four channels for classification based on the approach outlined in [1], where grid search was used to achieve similar results. More detailed information on this procedure can be found directly in [1]. \\nThank you for pointing out that our manuscript omitted a description of the emotional evaluation dimension used. Specifically, we employed valence as the indicator for binary classification, and we will include this information in the revised version of the paper. \\nRegarding the selection of duration, we found that previous literature does not provide direct evidence supporting the choice of 1s, 2s, or 4s as the optimal segment lengths [2][3]. It is likely that these choices were made due to limitations in Transformer-based models, which struggled to classify longer EEG sequences due to constrained computational resources. \\n\\n[1] Khateeb M, Anwar S M, Alnowami M. Multi-domain feature fusion for emotion classification using DEAP dataset[J]. IEEE Access, 2021, 9: 12134-12142. \\n[2] Song Y, Zheng Q, Liu B, et al. EEG conformer: Convolutional transformer for EEG decoding and visualization[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022, 31: 710-719. \\n[3] Jim\\u00e9nez-Guarneros M, Fuentes-Pineda G. Cross-subject EEG-based emotion recognition via semi-supervised multi-source joint distribution adaptation[J]. IEEE Transactions on Instrumentation and Measurement, 2023. \\n\\n**Q3: This work employs five-fold cross-validation for data partitioning, which does not appear to be a commonly used EEG dataset partitioning method. What is the rationale or basis for this choice?** \\n**A3:** We apologize for any confusion, but as far as we know, subject-wise cross-validation is a widely used method for partitioning EEG datasets, with common approaches including five-fold, ten-fold, or even twenty-fold cross-validation. References supporting this include [1][2][3][4][5]. We are curious to know the rationale behind your statement that this is an uncommon form of data partitioning. \\n\\n[1] Eldele, E., Chen, Z., Liu, C., Wu, M., Kwoh, C. K., Li, X., & Guan, C. (2021). An attention-based deep learning approach for sleep stage classification with single-channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 809-818. \\n[2] Yuan Z, Zhang D, Chen J, et al. Brant-2: Foundation Model for Brain Signals[J]. arXiv preprint arXiv:2402.10251, 2024. \\n[3] Ji Y, Li F, Fu B, et al. A novel hybrid decoding neural network for EEG signal representation[J]. Pattern Recognition, 2024, 155: 110726. \\n[4] Lawhern V J, Solon A J, Waytowich N R, et al. EEGNet: a compact convolutional neural network for EEG-based brain\\u2013computer interfaces[J]. Journal of neural engineering, 2018, 15(5): 056013. \\n[5] Zhang Z, Zhong S, Liu Y. 
TorchEEGEMO: A deep learning toolbox towards EEG-based emotion recognition[J]. Expert Systems with Applications, 2024, 249: 123550.\"}",
"{\"title\": \"Reply to Q3\", \"comment\": \"**Q3. section 3.2 Data Division:**\\n**Q3.a: Subject Variability Impact.** \\n**A3.a:** You are correct that subject variability is a significant challenge in BCI tasks, particularly in motor imagery. The lower performance on the BCI-IV-2a and SEED datasets, as compared to seizure detection and sleep stage classification, is indeed due to the high subject variability in these tasks. While the performance on these specific datasets may not reach the levels seen in subject-specific models, it is important to note that EEGMamba is trained to to work across various subjects without the need for individual calibration. We believe this is a valuable property for clinical applications where subject-specific models may not be feasible. \\n\\n**Q3.b: Performance Discrepancy with Benchmark Models.** \\n**A3.b:** The performance drop of the EEG Conformer and other benchmark models when evaluated using a subject-split approach highlights the sensitivity of these models to subject variability and the challenges of subject transfer. EEGMamba, with its multi-task learning approach, is designed to be more robust to such variability. \\nWe supplemented EEGMamba\\u2019s performance on BCI-IV-2a in the case of subject-specific training, and the standard deviation in the experiment was derived from five different random numbers. It should be noted that the results in EEGConformer were obtained through data augmentation techniques of segmentation and reconstruction, while all the experimental results in EEGMamba\\u2019s paper did not carry out similar data augmentation techniques. And in order to maintain unity with the original manuscript, we did not do this in our experiment. \\n| methods | s01 | s02 | s03 | s04 | s05 | s06 | s07 | s08 | s09 | average |\\n|:--------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|\\n| EEGMamba | 80.35 | 64.18 | 85.38 | 74.89 | 75.53 | 61.35 | 87.37 | 84.11 | 74.54 | 76.41 |\\n\\nWhile we cannot guarantee that EEGMamba would match subject-specific models like EEG Conformer when evaluated on a single subject, our model\\u2019s strength lies in its ability to generalize across subjects. We acknowledge the need for further investigation into how EEGMamba performs in a subject-specific setting and will include comparative analyses in future work. \\n\\n**Q3.c: Use of Separate Test Sets.** \\n**A3.c:** We apologize for the lack of clarity regarding the use of separate test sets. In our experiments, we did not use the official training set and test set provided by BCI-IV-2a for final model training and evaluation. The reason for this is because we want the main results to have a common experimental setting and show cross-subject generalization of the model, so we use a subject-split setting. As mentioned in A3.b, we supplemented the experimental results obtained using the official training set and test set of BCI-IV-2a. We will clarify this in the revised manuscript to ensure transparency and to facilitate fair comparisons with previous work.\"}",
"{\"title\": \"Reply to W1-W2\", \"comment\": \"Thank you for your detailed and thoughtful review of our manuscript. We value your comments and suggestions, and we are grateful for the opportunity to address them. In this response, we will provide further clarification and empirical evidence to address the concerns you raised. Here is our detailed response:\\n\\n**W1: The generalizability and flexibility of the current method in handling unforeseen tasks with different channel configurations.** \\nOur model can be extended to new datasets after training. We can describe the specific extension from two different situations as follows: \\n- When the newly added dataset has the same number of tasks, channels and classes as the previous dataset, we can directly use the task index of the original dataset. For example, the two datasets SleepEDF20 and SHHS used in the manuscript experiment can be replaced with each other, that is, if you want to apply the model trained on SleepEDF20 to SHHS, you only need to encode the same task and make a few epochs of fine-tuning. \\n- When the newly added dataset cannot meet the conditions in 1, we need to give it a new task number and pre-set its number of channels and classes, and then a few epochs of fine-tuning. This is broadly similar to what most current foundation models do, except that we need to pre-set the task number and the number of channels. \\n\\nWe use the Confused student EEG brainwave data [1] (hereinafter referred to as Confused EEG), which is a completely new task for EEGMamba. We applied the existing weights (i.e., the weights corresponding to Table 2-5 in the manuscript) to the Confused EEG, using 4 different random numbers to obtain a 7:3 training-test set ratio, similar to the approach in [3] (while [2] used five-fold cross-validation). We trained for 10 epochs each time and averaged the results. The test results are shown as follows: \\n\\n| Classification Network | [2], 2019 | [3], 2023 | EEGMamba |\\n|:------------------------:|:---------:|:---------:|:--------:|\\n| Reported Accuracy | 0.7500 | 0.7670 | 0.7825 |\\n\\n[1] Wang H, Li Y, Hu X, et al. Using EEG to Improve Massive Open Online Courses Feedback Interaction[C]//AIED workshops. 2013. \\n[2] Wang H, Wu Z, Xing E P. Removing confounding factors associated weights in deep neural networks improves the prediction accuracy for healthcare applications[C]//Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing. NIH Public Access, 2019, 24: 54. \\n[3] Lim Z Y, Neo Y L. Confused vs Non-Confused Electroencephalography Signal Classification Using Deep Learning Algorithm[C]//2023 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS). IEEE, 2023: 195-200. \\n\\n**W2: The motivation for employing Mamba, especially Bidirectional Mamba, in this work is not sufficiently clear or logically aligned with this objective.** \\nIn this work, we use the bidirectional Mamba for feature extraction from EEG signals, not only to reduce computational complexity but also to enhance the model\\u2019s overall ability to capture the characteristics of EEG signals. \\n**Let us be crystal clear: adopting bidirectional modeling does not affect the real-time detection capabilities of EEG signals.** In reality, whether it\\u2019s single-directional or bidirectional modeling, what we\\u2019re processing are already sampled EEG signal segments\\u2014not like in natural language tasks, where you need to predict the next word. 
Therefore, bidirectional modeling has no negative impact on real-time performance. \\nAs we know, Transformer-based models typically suffer from quadratic computational complexity, which leads to significant performance bottlenecks in practical applications. Consider medical diagnostic scenarios such as sleep disorder detection or Alzheimer\\u2019s diagnosis, where you need to process long durations (e.g., several minutes) and a large number of channels (e.g., 62 channels) of EEG data. In such cases, Transformer-based models consume an enormous amount of GPU memory, making it nearly impossible to meet real-time processing requirements. On the other hand, the Mamba model, with its linear computational complexity, consumes much less memory when handling long sequences, making it far better suited for such high-load tasks. This is the primary reason we chose the Mamba model. \\nThe reason for choosing bidirectional Mamba is that the original Mamba model was designed for language generation tasks, which typically use single-directional modeling. The nature of EEG signals makes single-directional modeling unsuitable\\u2014it could lead to the loss of earlier information during the scanning process. To capture the temporal features of EEG signals more comprehensively and accurately, we employed a bidirectional Mamba structure. Our ablation experiments also clearly demonstrate the performance gap between single-directional and bidirectional modeling in EEG classification.\"}",
"{\"title\": \"Reply to Q4-Q6\", \"comment\": \"**Q4: section 4.1 Single-Task EEGMamba Performance Comparison.**\\n**A4:** As mentioned in A1.a, the term \\\"long\\\" in the context of sequence length for Transformer does not refer to a strict threshold, but rather to a gradual point at which the quadratic computational complexity of transformers becomes a practical limitation. This threshold can vary depending on the specific model architecture and the available computational resources. \\nIt is important to note that the length of the sequence used for model processing is not directly equal to the duration of the signal, but is also related to the sampling frequency. For example, for a sleep-state EEG with a duration of 30 seconds and a sampling rate of 200Hz, its length is 6,000 data points. For some Transformer-based models, this has led to the need for larger memory-usage and a sharp drop in reasoning speed. \\n\\n**Q5: section 4.2 EEGMamba for EEG Multi-Task Classification.** \\n**A5:** Multi-task training significantly improves the model\\u2019s ability to generalize across different datasets and tasks. While single-task models may excel on specific datasets, they often fail to achieve this across multiple tasks or datasets. By leveraging shared representations, EEGMamba can transfer knowledge from one task to improve performance on others, similar to how some foundation models are pre-trained to learn a common representation of EEG data and then fine-tuned for specific tasks. However, EEGMamba goes beyond this by integrating the entire process into an end-to-end system, allowing it to learn task-specific features while still benefiting from shared knowledge. \\nRegarding the current experimental results, we acknowledge that they are based on the mean and standard deviation from five-fold cross-validation, which involves a limited number of experiments. As a result, this may affect the statistical significance of the P-value. Like most AI studies [1][2][3], we did not report the statistical differences in the results. Nevertheless, we believe that while multi-task training may not always lead to improved performance on each individual dataset, it provides a more robust framework for handling a variety of tasks. \\n[1] Zhang D, Yuan Z, Chen J, et al. Brant-X: A Unified Physiological Signal Alignment Framework[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 4155-4166. \\n[2] Yang C, Westover M, Sun J. Biot: Biosignal transformer for cross-data learning in the wild[J]. Advances in Neural Information Processing Systems, 2024, 36. \\n[3] Jiang W, Zhao L, Lu B. Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI[C]//The Twelfth International Conference on Learning Representations. \\n\\n**Q6: Lines 454-456** \\n**Q6.a: Cross-Session Issues:** \\n**A6.a:** We apologize for any confusion caused by our lack of clarity. EEGMamba is capable of learning features from multiple datasets within a single training session, but this applies only to the datasets included in the training set, rather than to completely unseen datasets in a zero-shot setting. Therefore, for new, unseen datasets, fine-tuning as described in A2.a is still necessary. \\n**Q6.b: Generalization to New Datasets:** \\n**A6.b:** We described the method of using EEGMamba for new data in A2.a. 
On the one hand, if SHHS is not used in training, it can be assigned the same task label as SleepEDF20 because its numbers of channels and classes match those of SleepEDF20. For BCI-IV-2B, which has fewer channels, we need to give it a new task number and fine-tune it for a few epochs; because the trained model already includes the ability to extract motor imagery features, this is still fundamentally different from training a model from scratch. \\n**Q6.c: Domain-Specific Advantages:** \\n**A6.c:** The benefits of multi-task training are not strictly domain-specific but are related to the similarity and diversity of the tasks. For tasks with overlapping features, such as different types of brain-computer interface (BCI) applications, multi-task training can be advantageous. \\n**Q6.d: Practicality of Multi-Task Training:** \\n**A6.d:** The practical value of multi-task training with EEGMamba is that it provides a flexible framework that can be adapted to various research settings. For researchers focusing on a single task, EEGMamba can still be applied by treating it as a multi-task model with a single task. This approach can leverage the model\\u2019s ability to learn from any available related data, potentially improving performance and generalization. In scenarios where multiple tasks are relevant, such as simultaneous classification of motor imagery and emotion, EEGMamba\\u2019s multi-task training provides a more efficient and integrated solution.\"}",
"{\"title\": \"Response to weakness and questions\", \"comment\": \"Thank you for your valuable feedback. We greatly appreciate the time and effort you have dedicated to reviewing our work. The following are our detailed responses to your weaknesses and questions.\\n**W1: In Section 4.1, the discussion on single-channel and multi-channel models only compares memory usage and inference speed, without evaluating the impact of multi-channel models on performance metrics.** \\nThanks. A comparison of our performance metrics results is shown in Table 2-5, where the number of channels for each dataset is shown in Table 1, with the number of channels ranging from 1 to 62. And we can obtain the EEG signal length from each data set according to the rate\\u00d7duration in Table 1. At present, for single-task EEGMamba, the maximum length is 128\\u00d760=7680 from the DEAP dataset, which we consider to be a very large length. When training with EEGMamba, our operation to unify the sampling frequency to 200Hz resulted in more datasets of larger length, such as the length of DEAP dataset is 200\\u00d760=12000, and the length of SHHS dataset is 200\\u00d730=6000. EEGMamba has the best performance among all models in this case. \\n\\n**W2: The t-SNE visualization lacks layer-by-layer analysis of the model\\u2019s influence on clustering results, which does not adequately demonstrate the feature extraction capability of each layer.** \\nT-SNE is a dimensionality reduction technique that is useful for visualizing high-dimensional data in two or three dimensions, focusing on the relative distances between data points to reflect similarities or dissimilarities. In our study, we use t-SNE to qualitatively assess EEGMamba\\u2019s ability to extract discriminative features, showing clear separation between classes in the plot. This clustering indicates that the model effectively distinguishes between different data distributions, supporting our claims about its feature extraction capabilities. \\nHowever, it is crucial to note that the coordinates in a t-SNE plot lack inherent meaning, emphasizing relative positions rather than absolute values. While layer-by-layer t-SNE visualizations could offer additional insights, they are less informative due to the lack of inherent meaning in the coordinates. Instead, we focus on overall model performance, using metrics like classification accuracy, AUC-ROC, and F1 scores to provide a more direct and quantitative assessment. \\n\\n**Q1: In Section 4.2\\u2019s experimental comparison, was the model trained using all the datasets at once? Could there be interactions between the datasets? Would training the model on each dataset separately improve the performance metrics? Have you conducted any experiments on this? I\\u2019m quite interested.** \\n**A1:** You are correct in noting that our model uses all the datasets in a single training process, which allows for interactions between them. And in the Single-task EEGMamba experiment, we trained the model separately on different datasets, and the corresponding experimental results are presented in Tables 2-5. We sincerely hope this helps solve your doubts. \\nInteractions between datasets in multi-task learning can have both positive and negative effects. On the positive side, the model benefits from shared knowledge across tasks. For instance, certain brain activity patterns, such as oscillatory rhythms or temporal dynamics, may be consistent across different tasks. 
\\nHowever, conflicting features or task-specific noise can interfere with the model\\u2019s ability to learn meaningful representations for each individual task. To address these challenges, we implemented several strategies in EEGMamba: \\n**Task-Aware Modules:** A Mixture of Experts (MoE) model activates different experts based on the task, allowing the model to learn task-specific features while leveraging shared knowledge. \\n**Universal Expert:** Another key component of EEGMamba is the universal expert, which is designed to capture cross-task features that are universally relevant, preventing overfitting to task-specific noise and enabling better generalization across datasets. \\nIn summary, while interactions between datasets can lead to both positive and negative effects in multi-task learning, our model incorporates strategies such as task-aware modules and the universal expert to amplify the positive effects and mitigate the negative ones, improving EEGMamba\\u2019s performance across different EEG tasks.\"}",
"{\"comment\": \"Thank you for your response. Based on the author\\u2019s additional explanation, we believe this work presents a novel multi-task learning EEG classifier. By leveraging multi-task learning, the model's generalization capability is enhanced. Furthermore, the incorporation of Task-Aware Modules and a Universal Expert effectively capitalizes on the advantages of multiple datasets.\"}",
"{\"metareview\": \"The work considers the problem of multitask decoding from EEG (to understand a model that can predict for different tasks without retraining). To so so it introduces a new architecture by using a bidirectional mamba state-space model and a MoE approach that can orient the model prediction based on a class token. Results are reported on 8 public datasets. To cope with the variability of input channels a different set of spatial filters are trained for each dataset.\", \"notable_contributions\": [\"Mamba demonstrates memory efficiency and fast inference, making it advantageous for real-world applications.\", \"The model addresses the multi-task classification problem, showcasing how MoE enables prediction across a variety of downstream tasks.\"], \"weaknesses\": \"- As pointed out by uVYA the motivation for multi-task training is not fully unclear. Effectively, the model cannot generalize to new datasets or subjects, particularly when new datasets vary in channel count.\\n- Comparison with SoTA including single tasks models are missing. While one can acknowledge the multi-task aspect of the work, as just pointed above taking a new dataset and a new task does effectively require retraining. There is no zero-shot transfer to new tasks here. So from a pure application point of view, if the objective is to have the best BCI model or the best sleep model it is not clear if the multitask approach is reasonable. Also as pointed out by uVYA results on very classical EEG BCI are pretty low. This questions the significance of the work.\\n\\nWhile wapT is concerned about the claim of architectural novelty, the current decision is motivated by the 2 concerns above.\\n\\nAs a general comment to this community, it becomes very clear that benchmarks on such data need to be more standardized (test sets, validation strategy, etc.)\", \"additional_comments_on_reviewer_discussion\": \"Reviewer wapT remains unconvinced after the discussion and acknowledged reading the feedback from the authors. Relevant concerns from uVYA are partially addressed (cf. key concerns in metareview). Feedback from xgg9 is weakly positive but rather shallow on the contribution. Overall no reviewer clearly champions this contribution, mostly due to unclear positioning of the work and no clear wins on a downstream task.\"}",
"{\"summary\": \"To address the issues of quadratic computational complexity in handling long-term dependencies in EEG classification models, and the lack of cross-task generalization as most models are designed for single tasks, this paper proposes the EEGMamba model. The model introduces the ST-Adaptive module to address the problem of varying EEG signal lengths and channel numbers across different datasets. It also proposes the Bidirectional Mamba to improve computational efficiency and incorporates the MoE (Mixture of Experts) module to simultaneously capture both the commonalities and differences in EEG signals. Relevant experiments were conducted on eight datasets across four EEG tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents a novel multi-task EEG classification model that incorporates the Bidirectional Mamba and MoE modules, offering a structurally innovative approach. Experiments were conducted on eight datasets, covering a wide range of downstream EEG tasks. The paper is clearly written and easy to follow, with a particularly well-explained method section.\", \"weaknesses\": \"This paper proposes a multi-task EEG classification model aimed at performing classification across multiple datasets (downstream tasks) using a single model. However, the experimental results do not demonstrate a clear advantage over other single-task models, making it difficult to convincingly argue for the benefits and necessity of the multi-task approach. Additionally, the ablation study results for the various modules lack significant and consistent differences, making it challenging to prove the effectiveness of each module. Moreover, the motivation for using the Bidirectional Mamba is insufficiently justified. The ST-Adaptive method, proposed to address the issue of varying EEG channel numbers across datasets, is essentially a module integration, lacking in innovation.\\n1. In this work, the ST-Adaptive module applies a different one-dimensional convolution for each task, transforming the varying original channel numbers into a fixed number of $D$ channels. However, this approach does not seem to fully achieve the concept of being \\\"adaptive.\\\" If a new task emerges with a number of channels outside the predefined range of $C_0$ to $C_N$ in the model, how would this be addressed? This raises concerns about the generalizability and flexibility of the current method in handling unforeseen tasks with different channel configurations.\\n2. The purpose of using Mamba in this paper is to reduce computational complexity, conserve resources, and improve efficiency. Generally, the ultimate goal of improving efficiency in EEG models is to achieve real-time recognition. Given that EEG signals, like natural language, possess temporal characteristics, theoretically, a unidirectional Mamba would better meet this requirement, as it only requires past data rather than future information. The motivation for employing Mamba, especially Bidirectional Mamba, in this work is not sufficiently clear or logically aligned with this objective.\\n3. In Section 4.1, the authors present the performance of the single-task EEGMamba and other transformer-based models concerning memory usage and inference speed as the sequence length increases. However, it is not clearly evident from Figure 4 that EEGMamba demonstrates a significant advantage over the other methods. 
Additionally, while it is acknowledged that memory usage increases and inference speed decreases with longer sequence lengths for both single-channel and multi-channel scenarios, the authors do not specify the actual sequence lengths employed in the current eight EEG tasks. This omission lacks a reference point, making it difficult to ascertain whether EEGMamba exhibits superior performance. Furthermore, as indicated in Appendix I, EEGNet appears to perform better in terms of memory usage and inference speed, while also demonstrating commendable performance across various datasets. This further undermines the effectiveness of the proposed method in this paper.\\n4. In Section 4.2, the authors present the performance of EEGMamba in multi-task classification. However, I observe that EEGMamba does not demonstrate a significant advantage over the baseline models, and in many datasets, its performance is inferior to that of other single-task models, indicating that the multi-task approach does not facilitate mutual enhancement among tasks. Therefore, I question the necessity of employing a single model to address multiple tasks rather than utilizing several smaller models for different tasks, which might yield better results. The existing findings lack persuasiveness and do not adequately support the motivations for this work or the claims regarding the strong generalization capabilities of the proposed multi-task model.\\n5. Regarding Figure 5, I observe that, apart from the sleep stage task, where there is a considerable variation in the activation probabilities of different experts, the activation probabilities for the other tasks across the eight experts are generally quite uniform. This uniformity makes it challenging to demonstrate a particular preference for any specific expert. How can the effectiveness and necessity of the MoE approach be substantiated under these circumstances?\\n6. Figure 6 presents the ablation study results for the various modules of EEGMamba. However, the data indicate that these modules appear to have minimal discernible impact, as the experimental results across the Siena, CHB-MIT, and SHHS datasets show little variation. This raises concerns regarding the ability to substantiate the effectiveness of each module.\", \"questions\": \"1. Could you please specify how the EEG tasks are encoded into task tokens?\\n2. I noticed that the DEAP dataset was utilized in this study, but only data from four electrodes were selected. What is the rationale behind this choice? Additionally, regarding the binary classification on the DEAP dataset, does it pertain to valence, arousal, liking, or dominance? Furthermore, in Table 1, the authors provide the optimal segment lengths for all datasets. What references were used to determine these durations? I observed that, unlike most existing works that employ shorter segments of 1s, 2s, or 4s for the DEAP and SEED datasets, this paper utilizes segment lengths of 60s and 20s. What is the reasoning for selecting such unusually long data lengths?\\n3. This work employs five-fold cross-validation for data partitioning, which does not appear to be a commonly used EEG dataset partitioning method. What is the rationale or basis for this choice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**W3: In Section 4.1, the authors present the performance of the single-task EEGMamba and other transformer-based models concerning memory usage and inference speed as the sequence length increases\\u2026**\\nFrom Figure 4, it is clear that, for other Transformer-based models, the blue line representing the Single-task EEGMamba shows a distinct advantage. The only model that can somewhat match Single-task EEGMamba when processing single-channel data is HCANN, but in multi-channel scenarios, its performance rapidly declines. Meanwhile, all the other models consistently perform worse than Single-task EEGMamba. \\nAs for the signal length, the actual sequence length for each signal can be obtained from the rate \\u00d7 duration in Table 1. We will consider replacing \\\"duration\\\" with \\\"sequence length\\\" to make it easier for the readers to understand. \\nClearly, EEGNet\\u2019s overall performance on the dataset we\\u2019re using is far from comparable to EEGMamba, especially when it comes to tasks like sleep stage detection, which involves longer signals. Additionally, EEGNet requires dedicated training for each dataset, which is quite a tedious process. \\n\\n**W4: In Section 4.2, the authors present the performance of EEGMamba in multi-task classification\\u2026** \\nEEGMamba has been evaluated on eight publicly available EEG datasets across four different tasks, demonstrating superior performance in seizure detection, emotion recognition, sleep stage classification, and motor imagery. Our model ranks among the top three on seven datasets and achieves the best performance on four datasets, outperforming existing state-of-the-art (SOTA) models across multiple datasets. \\nIn addition, EEGMamba is an end-to-end system that does not require separate pre-training and fine-tuning stages, offering stronger generalization ability than pre-trained models. Unlike other classification networks that require multiple training sessions with manual adjustments to data length, channel count, and class numbers, EEGMamba only needs to be trained once to achieve these results. \\n\\n**W5: Regarding Figure 5, I observe that, apart from the sleep stage task\\u2026** \\nWhen it comes to the activation probabilities of experts, it\\u2019s not only important to observe the differences in activation probabilities between experts for a given task, but even more crucial to focus on the tendency of experts being activated across different tasks. For instance, in the task of seizure detection, experts 5 and 6 appear to be more easily activated, while for tasks like emotion recognition and motor imagery, experts 2 and 4 are more likely to be chosen, respectively. This suggests that different tasks tend to favor different experts. In other words, each task seems to have its own preference for specific experts, which is an interesting characteristic that could be leveraged to improve task-specific performance. \\n\\n**W6: Figure 6 presents the ablation study results for the various modules of EEGMamba\\u2026** \\nWe sincerely apologize, but due to the particularly poor performance of certain variants (such as the unidirectional Mamba), it\\u2019s been difficult to further adjust the y-axis range in a way that would make the differences more visually distinct. 
Additionally, for some datasets and tasks, the inherent characteristics of the dataset (for example, the extreme class imbalance in the epilepsy dataset) make it quite challenging to observe significant differences when evaluating using accuracy. We recommend that you enlarge the image up to 400% so that you\\u2019ll be able to more clearly see the differences between the various variants.\", \"title\": \"Reply to W3-W6\"}",
"{\"title\": \"Reply to Q1\", \"comment\": \"Thank you for your insightful comments. We appreciate the opportunity to provide further clarification and evidence to support our claim. We have noticed that the weakness you mentioned has a corresponding relationship with the question, so we will answer the question directly. Here is our detailed response:\\n\\n**Q1. Introduction: (line 49-50):** \\n**Q1.a: For EEG classification, what defines a \\\"long\\\" signal in terms of sample size? At what point does a short receptive field cause CNN performance to degrade?** \\n**A1.a:** In the context of our manuscript, \\\"long\\\" refers to EEG signals that exceed the typical short segments (e.g., several seconds) that CNNs are traditionally designed to handle well. Our claim is based on the observation that as the sequence length increases, the performance of CNN-based models, which lack a mechanism to explicitly model long-range dependencies, can degrade. For example, EEG signals for sleep monitoring must have a standard length of 30s, which tends to result in a larger length, and we also observed that CNN-based EEGNet did not perform very well on the sleep stage classification task. \\nTo clarify, the degradation in performance is not a hard threshold but a gradual effect that becomes more pronounced with longer sequences. The exact point at which performance degrades can vary based on the specific architecture, the complexity of the task, and the characteristics of the EEG signals. \\n\\n**Q1.b: Is global sequence modelling truly necessary for long-term EEG signals? From a neuroscience perspective, how much does brain activity from, say, 10 seconds ago, affect the current state? Which types of tasks specifically require such long-term modelling?** \\n**A1.b:** Global sequence modeling is not necessary for every EEG task, but it is essential for tasks where the temporal dynamics of brain activity play a key role in accurate classification. From a neuroscience perspective, while the effects of brain activity may diminish over time, certain types of brain activity (such as those associated with memory recall or seizures) may have lasting effects. On the other hand, long-term patterns in some states are crucial for understanding brain states, such as sleep stage classification. \\nIn sleep stage classification, longer EEG segments (such as 30 seconds) are required due to the cyclical nature of the sleep cycle and the evolving pattern of brain activity. During sleep, especially in the deep sleep stages, EEG features tend to be slow and persistent. Shorter clips may not be enough to capture a steady pattern of these activities, while 30-second segments often accurately reflect the brain\\u2019s electrical activity at each sleep stage. Therefore, for tasks such as sleep stage classification, global sequence modeling, which captures both short-term and long-term dependencies, is essential for accurate classification.\"}",
"{\"title\": \"A reminder to reviewerSRQL\", \"comment\": \"Dear Reviewer SRQL,\\n\\nWe would like to gently remind you that we have submitted our response to the concerns you raised. We hope these further explanations help to resolve any outstanding issues. We sincerely hope that you might consider revising your score based on our response. If you have any questions, we would be more than happy to discuss with you. \\n\\nThank you again for the time and effort you put into reviewing our manuscript.\\n\\nBest regards, \\n\\nThe authors\"}",
"{\"title\": \"Reply to Q3-Q5\", \"comment\": \"**Q3: How would EEGMamba perform on out-of-distribution tasks or tasks not seen during training? Have you tested its ability to generalize to entirely new EEG task types?**\\n**A3:** Thank you for the good suggestion. We use the Confused student EEG brainwave data [1] (hereinafter referred to as Confused EEG), which is a completely new task for EEGMamba. We applied the existing weights (i.e., the weights corresponding to Table 2-5 in the manuscript) to the Confused EEG, using 4 different random numbers to obtain a 7:3 training-test set ratio, similar to the approach in [3] (while [2] used five-fold cross-validation). We trained for 10 epochs each time and averaged the results. The test results are shown as follows: \\n| Classification Network | [2], 2019 | [3], 2023 | EEGMamba |\\n|:------------------------:|:---------:|:---------:|:--------:|\\n| Reported Accuracy | 0.7500 | 0.7670 | 0.7825 |\\n\\n[1] Wang H, Li Y, Hu X, et al. Using EEG to Improve Massive Open Online Courses Feedback Interaction[C]//AIED workshops. 2013. \\n[2] Wang H, Wu Z, Xing E P. Removing confounding factors associated weights in deep neural networks improves the prediction accuracy for healthcare applications[C]//Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing. NIH Public Access, 2019, 24: 54. \\n[3] Lim Z Y, Neo Y L. Confused vs Non-Confused Electroencephalography Signal Classification Using Deep Learning Algorithm[C]//2023 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS). IEEE, 2023: 195-200. \\n\\n**Q4: Could you clarify how this work builds on or differs from previous applications of SSMs or Mamba models in EEG classification? Adding this context could help readers better understand the specific contributions of EEGMamba.** \\n**A4:** We searched for relevant keywords in Google Scholar and reviewed the literature published before the paper submission deadline. Most of the studies applying Mamba to EEG are still in preprint form and have not yet undergone peer review. Therefore, we provide a brief summary here to give readers an overview of the application of SSMs or Mamba models in EEG classification. \\n[1] is the first application of Mamba to EEG used a single-directional Mamba model, but the article provides only a brief introduction to the methodology and experiments. [2] proposed a self-supervised learning (SSL) framework for sleep stage classification, utilizing a Mamba-based temporal context module to capture relationships among different EEG epochs, although the main component is still a Transformer-based model. [3] used Brain Timeseries Mamba and Brain Network Mamba module to encode the spatiotemporal features and long-term dependencies of EEG signals, respectively, to achieve efficient EEG classification. \\n[1] Panchavati S, Arnold C, Speier W. Mentality: A Mamba-based Approach towards Foundation Models for EEG [J]. \\n[2] Lee C H, Kim H, Han H, et al. NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG[J]. arXiv preprint arXiv:2404.17585, 2024. \\n[3] Behrouz A, Hashemi F. Brain-mamba: Encoding brain activity via selective state space models[C]//Conference on Health, Inference, and Learning. PMLR, 2024: 233-250. \\n\\n**Q5: Can you evaluate this on longer contexts to better get a sense for the necessity and usefulness for Mamba?** \\n**A5:** Thank you for your suggestion. 
We can obtain the EEG signal length for each dataset as rate\\u00d7duration from Table 1. At present, for single-task EEGMamba, the maximum length is 128\\u00d760=7680 samples from the DEAP dataset, which we consider a very long sequence. When training EEGMamba, unifying the sampling frequency to 200Hz produces even longer sequences for several datasets: for example, the length of the DEAP dataset becomes 200\\u00d760=12000 samples and the length of the SHHS dataset becomes 200\\u00d730=6000 samples. EEGMamba achieves the best performance among all models in this setting.\"}",
"{\"summary\": \"The authors introduce EEGMamba, a model designed for multi-task EEG classification. EEGMamba consists of an ST-Adaptive module that learns spatial filters for each task, transforming EEG inputs with varying channel counts into a uniform feature space. The module then tokenizes the data using both small and large kernels to capture short-term and long-term features. These tokens are processed by a BiMamba backbone with task-aware Mixture of Experts (MoE) layers, enabling the model to capture both task-specific and shared features. Finally, each task has a dedicated classification head.\\n\\nEEGMamba allows for multi-task EEG classification in a single training session. The authors evaluated the model\\u2019s performance against five other models across eight public datasets, covering tasks such as epilepsy detection, sleep stage classification, emotion recognition, and motor imagery. The experiments used a 5-fold cross-validation approach, where specific subjects were reserved for the test set in each fold. Results show that EEGMamba outperformed competing models under this evaluation, and it demonstrated efficient memory usage and inference speed, particularly with long-sequence data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors successfully introduce the new Mamba architecture to EEG decoding, achieving strong results across 8 public datasets.\\n\\n2. Mamba demonstrates memory efficiency and fast inference, making it advantageous for real-world applications.\\n\\n3. The model effectively addresses the multi-task classification problem, showcasing the feasibility of training a single model for multiple downstream tasks.\", \"weaknesses\": \"1. The motivation for multi-task training is somewhat unclear. The authors should clarify why multi-task training is necessary and how it can be beneficial for specific applications.\\n\\n2. Certain questions remain unanswered regarding the practical use of the proposed model, which limits its potential impact. For instance, under the current evaluation scheme, it is unclear how the model would generalize to new datasets or subjects, particularly when new datasets vary in channel count. Does introducing a new dataset require retraining or additional multi-\\n\\ntask training even if it is single-task? Additionally, what would be the training strategy for developing a subject-dependent model within the multi-task framework if only one task is available from the subject? To address this, the authors could consider testing the model on an additional dataset to evaluate whether the pre-trained model can transfer effectively. If it cannot, are there still advantages to the EEGMamba multi-task approach?\\n\\n3. The authors should discuss the relatively low classification accuracy on the motor imagery datasets, which is currently too low for practical motor imagery classification. If this is due to the evaluation setting, additional experiments with per-subject models should be conducted to assess performance, and these results should be compared to other models.\\n\\n4. Statistical tests are needed to confirm whether the observed differences between models or modules are significant. For instance, statistical analysis should be conducted for the results in Figure 1 comparing EEGMamba and Single-task EEGMamba, as well as for the different ablation models in Figure 6.\", \"questions\": \"1. 
Introduction: (line 49-50):\\n\\nThe authors claim that CNNs are unable to handle long EEG signals, citing three papers: (Sakhavi etal.,2018), (Thuwajit etal.,2021) and (Schirrmeister etal.,2017). However, none of these studies provide evidence to support such a conclusion. In fact, Thuwajit et al. (2021) proposed EEGWavenet, which utilizes a multiscale CNN-based spatiotemporal feature extraction module. This module gradually increases its receptive field to match the length of the input EEG signal, indicating that CNNs can handle the global input sequences in certain contexts. While the authors' claim is not entirely substantiated, it raises interesting questions that I would like to see addressed:\\n\\na. For EEG classification, what defines a \\\"long\\\" signal in terms of sample size? At what point does a short receptive field cause CNN performance to degrade?\\n\\nb. Is global sequence modelling truly necessary for long-term EEG signals? From a neuroscience perspective, how much does brain activity from, say, 10 seconds ago, affect the current state? Which types of tasks specifically require such long-term modelling?\\n\\n2. section 2.2 ST-Adaptive Module:\\n\\nThe authors propose a spatio-temporal-adaptive module that transforms arbitrary EEG inputs into a uniform feature dimension, as depicted in Figure 3. However, I have several concerns:\\n\\na. Scalability to New Datasets: Is this approach scalable to new datasets once the model is trained? Given that different Conv1d layers are used to transform varying numbers of EEG channels into a common hidden state in the Spatial-Adaptive Convolution, how flexible is this method for new tasks where the number of channels may differ from the training datasets? Even if the number of channels is the same, the channels themselves vary. It appears that the model learns dataset-specific spatial filters rather than a truly global spatial representation, which may limit its generalizability to new datasets, which is a big issue for a backbone model, since people would like to use it on different tasks later on.\\n\\nb. Tokenize Layer: In the tokenize layer, two branches of CNN are used: one with a short kernel (15) and another with a long kernel (49). Was this choice based on experimental results? Why are there only two branches, and why were kernel sizes 15 and 49 specifically chosen? Since there\\u2019s no discussion of this configuration in the ablation study, it\\u2019s unclear whether this is the most optimal setup.\\n\\n3. section 3.2 Data Division:\\n\\nIn this study, the 5-fold cross-validation experiment was implemented as leave-subject-out, which is not a common approach in the BCI field due to the significant subject variability. This evaluation approach faces two challenges: (1) developing a population model trained on a group of subjects, and (2) addressing subject transfer by evaluating the model on unseen subjects. This raises several concerns:\\n\\na. Subject Variability Impact: In tasks with minimal subject variability, such as seizure detection and sleep stage detection, the classification results are high, as shown in Figure 1. However, tasks like motor imagery and other BCI tasks exhibit high subject variability, which severely impacts model performance. This is evident in the low accuracies achieved on the BCI-IV-2a task (44%) and the SEED task (57.2%), which are insufficient for practical BCI applications. 
A discussion on this performance discrepancy is necessary to demonstrate how the proposed model addresses these challenges and whether it shows superiority in domains with high subject variability.\\n\\nb. Performance Discrepancy with Benchmark Models: Many benchmark models, such as EEG Conformer (Song et al., 2022), were not designed to handle population transfer. EEG Conformer, for instance, was trained in a subject-specific manner and achieved state-of-the-art performance on the same BCI-IV-2a (78.66%) and SEED (95.30%) datasets. However, in this study, the EEG Conformer\\u2019s performance dropped significantly to 35.2% and below 50%, respectively. Could the authors explain this stark performance difference? Is it primarily due to the leave-subject-out evaluation setting? If so, would the proposed EEGMamba model also retain its high performance if evaluated in a subject-specific manner like EEG Conformer?\\n\\nc. Use of Separate Test Sets: Datasets like BCI-IV-2a include a separate test set specifically intended for evaluation. In this study, it is unclear whether the authors utilized this test set. Since many studies report performance on this designated test set, comparing the classification accuracy reported here with those from other studies may not be straightforward. Clarification on whether the official test set was used, or if an alternative test split was applied, is needed to ensure a fair comparison with prior work.\\n\\n4. section 4.1 Single-Task EEGMamba Performance Comparison\", \"line_377\": \"The authors mention the memory and inference time challenges of transformer models when handling long sequences. Similar to my previous concern, what qualifies as a \\\"long\\\" sequence in terms of signal length for transformers to encounter this bottleneck? Is there a real-world application where such long sequences need to be tackled? Many EEG tasks are only a few seconds in length, so it would be helpful to clarify the practical need for handling significantly longer sequences.\\n\\n5. section 4.2 EEGMamba for EEG Multi-Task Classification\\n\\nWhile the idea of training a single model to perform well across multiple datasets is interesting, its practical application is unclear. From Figure 1, the single-task models appear to achieve similar or even better performance on 5 out of the 8 benchmarked datasets (Siena, BCI-IV-2a, Shu, SEED, CHB-MIT). This raises a few questions:\\n\\na. Benefit of Multi-Task Training: Is there a demonstrable benefit to multi-task training? Specifically, is there any statistical difference between the performance of the single-task model and the multi-task model on the datasets? It would be helpful to clarify whether multi-task training consistently improves performance or if its benefits are marginal.\\n\\n6. Lines 454-456:\\n\\nThe authors argue that the model only needs to be trained once. However, the analysis was performed offline on 8 selected public datasets solely. To assess whether this model can be applied in real-world scenarios, the following questions need to be addressed:\\n\\na. Cross-Session Issues: In practice, many EEG-based applications require short calibration sessions to adjust for cross-session variability in subject-dependent models. Does the proposed model also require such calibration? If calibration is needed, would this involve further training or fine-tuning of the pre-trained model? In the case of the multi-task model, would calibration require data from other tasks as well, or could it be done independently?\\n\\nb. 
Generalization to New Datasets: How well does the model perform on a new dataset that belongs to one of the pre-trained tasks? If the model only needs to be trained once, does this mean it can be directly applied to similar tasks without additional training? For example, how would the model handle BCI-IV-2B, which has only 3 channels but is still a motor imagery task, or another sleep stage classification dataset? If yes, how would the model manage inconsistencies in the number of channels, and what performance can be expected? If not, wouldn\\u2019t this imply that researchers would still need to retrain the model, making it not different from other models?\\n\\nb. Domain-Specific Advantages: If multi-task training is advantageous, is this benefit domain-specific? For example, should a researcher developing a motor imagery decoder expect better results from multi-task training? How can researchers determine whether a single-task or multi-task approach will yield better performance for a specific domain or task?\\n\\nc. Practicality of Multi-Task Training: In practice, most researchers focus on specific tasks and experiments, collecting data for single tasks. Does this multi-task approach suggest that researchers in the future should also record additional tasks or rely on public datasets to improve performance? Or is there a scenario where it would make sense for a model to simultaneously classify motor imagery, emotion, sleep stages, and seizure events? More guidance on when and why to use multi-task training would be valuable.\\n\\n7. section 4.4 Ablation Study:\\n\\nIn Figure 6, it appears that the performances of the different configurations, except for the single-directional Mamba, are quite similar. Is there a statistically significant difference between these configurations? It would be helpful to include a discussion on whether the variations in performance are meaningful or simply within the margin of error.\\n\\n8. Conclusion lines 526-527:\\n\\nThe authors claim that EEGMamba is the first model to truly implement multi-task learning for EEG applications. However, I am curious about how EEGMamba differs from a simpler approach, such as using a shared backbone model as a feature extractor with separate classification heads for different tasks, as done in [1]. Could the authors clarify the key differences between the proposed model and such an approach using MoE?\\n\\n[1] Xie Y, Wang K, Meng J, Yue J, Meng L, Yi W, Jung TP, Xu M, Ming D. Cross-dataset transfer learning for motor imagery signal classification via multi-task learning and pre-training. J Neural Eng. 2023 Oct 20;20(5). doi: 10.1088/1741-2552/acfe9c. PMID: 37774694.\\n\\n9. Conclusion line 531:\\n\\nThe authors claim that the proposed model can better learn the commonalities among EEG signals from different tasks. However, what specific commonalities are being referred to? Is there any interpretation or evidence to support this claim? 
It would be helpful to understand how these commonalities are identified and whether the model offers any insight into them.\\n\\nAddressing the above questions could help clarify specific weaknesses and improve the overall impact of the study, more specifically:\", \"question_5\": \"Answering this question would clarify Weakness Point 1, providing more insight into the motivation for multi-task training.\\n\\nQuestions 1, 2(a), and 6: Answering these would address Weakness Points 2 and 3, which would significantly enhance the study's potential impact and score by clarifying practical applications and the model\\u2019s performance on motor imagery datasets.\", \"questions_4_and_7\": \"These would address Weakness Point 4 by supporting the comparisons with a statistical validation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents EEGMamba, a model tailored for multi-task EEG classification. It aims to overcome the limitations of existing models in terms of computational complexity and generalization across tasks with varying signal lengths and channel counts. EEGMamba integrates three main innovations: the Spatio-Temporal-Adaptive (ST-Adaptive) module for unified feature extraction, Bidirectional Mamba to balance accuracy and computational efficiency, and a Task-aware Mixture of Experts (MoE) to handle the differences and similarities across EEG tasks. Evaluated across eight public datasets and covering seizure detection, emotion recognition, sleep stage classification, and motor imagery, EEGMamba demonstrates strong performance and adaptability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"EEGMamba achieves state-of-the-art performance across multiple EEG tasks, demonstrating robust multi-task capability. Its bidirectional Mamba blocks enable efficient handling of long sequences, avoiding the high memory demands of Transformer models. The flexible ST-Adaptive module supports EEG signals of varied lengths and channels, and the task-aware Mixture of Experts (MoE) enhances task-specific accuracy, reducing interference across tasks. This adaptability and strong generalization across datasets position EEGMamba as a versatile model for EEG classification.\", \"weaknesses\": \"EEGMamba\\u2019s evaluation lacks comparison the newest baseline models (notably LaBRaM from ICLR 2024), which limits the interpretability of its reported gains. The Spatio-Temporal-Adaptive (ST-Adaptive) module, while flexible for varying signal lengths and channel counts, may not adequately capture complex channel-time dependencies crucial in EEG data. Furthermore, training the ST module on a per-task basis could lead to representations that are too specialized, reducing generalizability, particularly for out-of-distribution (OOD) tasks, which were not included in the study. Additionally, the paper lacks a related works section detailing prior applications of SSMs or Mamba in EEG classification, making it difficult to contextualize EEGMamba\\u2019s specific advancements in this space.\", \"questions\": \"Could you elaborate on the choice of baseline models? How does EEGMamba\\u2019s performance compare with multi-task EEG models specifically designed for generalization across diverse tasks?\\n\\nHow does the ST-Adaptive module capture interactions between channels and temporal features, which are critical in EEG data? Have you considered modeling these dependencies more explicitly?\\n\\nHow would EEGMamba perform on out-of-distribution tasks or tasks not seen during training? Have you tested its ability to generalize to entirely new EEG task types?\\n\\nCould you clarify how this work builds on or differs from previous applications of SSMs or Mamba models in EEG classification? Adding this context could help readers better understand the specific contributions of EEGMamba.\\n\\nCan you evaluate this on longer contexts to better get a sense for the necessity and usefulness for Mamba?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**Q7: section 4.4 Ablation Study**\\n**A7:** As mentioned in A5, our ablation experiments also used five-fold cross-validation\\uff0cwhich involves a limited number of experiments. As a result, this may affect the statistical significance of the P-value. Like most AI studies [1][2][3], we did not report the statistical differences in the results. \\n[1] Zhang D, Yuan Z, Chen J, et al. Brant-X: A Unified Physiological Signal Alignment Framework[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 4155-4166. \\n[2] Yang C, Westover M, Sun J. Biot: Biosignal transformer for cross-data learning in the wild[J]. Advances in Neural Information Processing Systems, 2024, 36. \\n[3] Jiang W, Zhao L, Lu B. Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI[C]//The Twelfth International Conference on Learning Representations. \\n\\n**Q8: Conclusion lines 526-527** \\nWe appreciate the opportunity to clarify how EEGMamba differs from models that use a shared backbone with separate classification heads. \\n- **End-to-End Training**: EEGMamba is trained end-to-end, allowing all components of the model to learn jointly, which makes EEGMamba highly efficient, as it avoids the need for multiple rounds of fine-tuning for each task. This is in contrast to models that train a shared backbone separately and then attach task-specific heads, which can be time-consuming and less efficient, especially when working with a large number of diverse tasks. \\n- **Multi-Task Learning Capability**: EEGMamba is designed to handle multiple tasks and datasets simultaneously within a single training session. Unlike existing foundation models, which can only process one dataset at a time during fine-tuning, EEGMamba is capable of delivering results from multiple datasets simultaneously in a single session. \\nIn summary, EEGMamba offers a more integrated and sophisticated approach to multi-task learning for EEG applications. We hope this clarification highlights the unique advantages of EEGMamba and addresses your query effectively. \\n\\n**Q9: Conclusion line 531** \\n**A9:** The commonalities in EEG signals across different tasks are rooted in shared neurophysiological processes present in various cognitive and physiological states. These commonalities include oscillatory activities (such as alpha, beta, and delta rhythms), event-related potentials (ERPs) that are time-locked to specific events, and coherence across brain regions, all of which appear in multiple tasks. Additionally, EEG signals often exhibit similar temporal dynamics, such as common patterns of slow-wave activity during transitions between sleep stages or the onset of seizures. \\nEEGMamba\\u2019s ability to capture and leverage these commonalities is evidenced by its superior performance across eight diverse EEG datasets, including tasks like seizure detection, emotion recognition, and motor imagery, indicating its strong generalization capability. Further support comes from ablation studies, which show that removing the universal expert\\u2014designed to capture cross-task features\\u2014results in a performance drop, underscoring the importance of shared knowledge for generalization.\"}",
"{\"summary\": \"The paper introduces EEGMamba, a novel EEG classification network designed for multitask learning. It integrates spatiotemporal adaptive (ST-Adaptive) modules, bidirectional Mamba, and a mixture of experts (MoE) approach. This addresses challenges in EEG classification, such as computational complexity and variations in signal length and channels. The model efficiently handles long sequences and adapts to feature extraction while capturing both task-specific and general features. Evaluations on multiple public EEG datasets demonstrate that EEGMamba outperforms existing models in seizure detection, emotion recognition, sleep quality, and emotion recovery.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"### **Originality**\\nThe originality of EEGMamba lies in its novel approach to EEG classification through the integration of bidirectional Mamba, Spatio-Temporal-Adaptive (ST-Adaptive) modules, and task-aware Mixture of Experts (MoE). The innovative combination of these elements addresses the computational complexity and variability in signal length and channels, which are critical challenges in EEG classification. This creativity, particularly in applying multitask learning to EEG signals, represents a significant advancement in the field.\\n\\n### **Quality**\\nThe quality of this work is demonstrated through rigorous evaluations on multiple publicly available EEG datasets. EEGMamba's superior performance in seizure detection, emotion recognition, sleep quality, and emotion recovery highlights its robustness and effectiveness. The model's ability to handle long sequences and adapt to different feature extraction tasks while maintaining high accuracy and fast inference speed.\\n\\n### **Clarity**\\nThe authors provide a detailed description of the EEGMamba architecture and its components. The step-by-step explanation of how the bidirectional Mamba, ST-Adaptive module, and task-aware MoE are integrated and function together contributes to a well-structured and coherent narrative.\\n\\n### **Significance**\\nThe model's design, which allows for the efficient capture of both task-specific and general features, has the potential to transform how EEG data is processed and analyzed. This can lead to more accurate and comprehensive analyses of complex brain signal data, benefiting various applications such as medical diagnostics and cognitive research.\", \"weaknesses\": \"Overall, the experiments in this paper are comprehensive; however, in Section 4.1, the discussion on single-channel and multi-channel models only compares memory usage and inference speed, without evaluating the impact of multi-channel models on performance metrics. Additionally, the t-SNE visualization lacks layer-by-layer analysis of the model's influence on clustering results, which does not adequately demonstrate the feature extraction capability of each layer. It is recommended to visualize feature clustering by module.\", \"questions\": \"In Section 4.2's experimental comparison, was the model trained using all the datasets at once? Could there be interactions between the datasets? Would training the model on each dataset separately improve the performance metrics? Have you conducted any experiments on this? I'm quite interested.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for reviewing our paper again. We understand your concerns regarding the experimental comparisons and believe we have provided sufficient clarification in our response. We trust that our experimental design and results adequately support the conclusions drawn in the paper, and we hope this further addresses any remaining doubts. Thank you for your time and consideration.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Dear Reviewer xgg9,\\n\\nThank you for your thoughtful feedback. We sincerely appreciate your recognition of the novelty of our multi-task learning EEG classifier. We believe that this work can contribute to the application of artificial intelligence in the EEG research community.\\n\\nThank you again for the time and effort you put into reviewing our manuscript.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"title\": \"Response to weakness and questions\", \"comment\": \"Thanks. It seems there has been a misunderstanding on the definition of a multitask classification model, so perhaps a brief clarification is in order. **As we clearly stated in the paper, our model is designed to handle classification tasks across multiple datasets simultaneously, without the need for the fine-tuning training on each dataset separately**\\u2014unlike the base models cited by reviewer wapT. Therefore, the references provided by reviewer wapT [1, 2, 5, 6] clearly do not represent multitask models, as they require dedicated fine-tuning for each specific dataset.\\n\\n**W1: The author claims that EEGmamba is the first universal EEG classification network to effectively implement multi-task learning for EEG applications. However, several established methods are available that can be applied to multi-task EEG learning, as cited in [1][2][3].** \\nAs we have stated in a previous clarification, [1, 2] are not multitask models. [3] is a multitask model. However, since it was published recently (within a month of submission deadline) and has not yet undergone peer review, we will not consider it for comparison at this time. \\nMoreover, we have noticed that the paper submitted to this conference, *NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals*, bears a striking resemblance to [3]. And we have observed that several reviewers have already expressed concerns about the performance of [3]: \\\"4vcv: **NeuroLM\\u2019s performance is currently lower compared to LaBraM on most tasks**\\\" and \\\"2GzD: **While NeuroLM performs well in multi-task learning, there are still gaps in its performance on some tasks compared to state-of-the-art models specifically designed and optimized for a single task (e.g., LaBraM)**\\\" We fully agree with these assessments, and we have added a comparison with LaBraM. However, **let it be clear: we are not comparing LaBraM as a multitask model similar to ours**, but rather as another baseline comparable to BIOT. [3] can also support us in doing so by the fact that Tables 2-4 in [3] give LaBraM a \\u2018\\u00d7\\u2019 for multitask performance. \\nTherefore, we sincerely hope that reviewer wapT could take the time to carefully read the relevant literature, and consider change the review points. \\n| Classification Network | Multitask Model | Epilepsy detection | | Sleep stages classification | | Emotion recognition | | Motor imagery | |\\n|:------------------------:|:---------------:|:--------------------:|:-------------:|:-----------------------------:|:-------------:|:---------------------:|:-------------:|:---------------:|:-------------:|\\n| | | Siena | CHB-MIT | SleepEDF-20 | SHHS | DEAP | SEED | Shu | BCI-IV-2a |\\n| LaBraM | \\u00d7 | 0.9886\\u00b10.0043 | 0.9742\\u00b10.0099 | 0.7503\\u00b10.0388 | 0.7785\\u00b10.0243 | 0.5822\\u00b10.0321 | OOM | 0.5368\\u00b10.0312 | 0.2879\\u00b10.0160 |\\n| Single-task EEGMamba | \\u00d7 | 0.9897\\u00b10.0053 | 0.9817\\u00b10.0036 | 0.8387\\u00b10.0399 | 0.8441\\u00b10.0163 | 0.5985\\u00b10.0247 | 0.5779\\u00b10.0584 | 0.6169\\u00b10.0467 | 0.4596\\u00b10.0547 |\\n| EEGMamba | \\u221a | 0.9897\\u00b10.0038 | 0.9789\\u00b10.0132 | 0.8486\\u00b10.0276 | 0.8478\\u00b10.0177 | 0.5994\\u00b10.0134 | 0.5646\\u00b10.0366 | 0.6207\\u00b10.0505 | 0.4231\\u00b10.0522 |\\n\\n**W2: Inappropriate Comparisons: The choice of baselines for comparison lacks relevance. 
\\u2026 To evaluate cross-task capabilities, comparisons should involve multi-task learning methods like those in [1][2][3][5][6].** \\nAs we have pointed out in a previous clarification, [1, 2, 5, 6] are not multitask models. Among them, [5, 6] are both specifically designed for intracranial EEG signals and only use SEEG data for pretraining. EEG and SEEG are two distinct modalities, and to our knowledge, there is no literature suggesting that the differences between them can simply be ignored. If reviewer wapT can provide such references, we might consider [5, 6] as baselines. Additionally, we noticed that the papers cited by reviewer wapT, [1, 3], also do not compare with the earlier works [5, 6], which further supports our view. \\n\\n**W3: Efficiency Evaluation: The author should provide quantitative evidence demonstrating the proposed method\\u2019s efficiency advantages over previous approaches.** \\nOn the one hand, Tables 2-5 in our manuscript report the experimental results of EEGMamba compared with previous methods using the quantitative metrics ACC, AUROC, and F1, demonstrating the performance advantage of EEGMamba. On the other hand, the computational efficiency of the Mamba-based model is shown in Figure 4. We are not sure what \\\"quantitative evidence\\\" refers to here; perhaps you could be more specific. \\n\\nThe reference numbering in our response follows that of reviewer wapT.\"}",
"{\"title\": \"Reply to Q1-Q2\", \"comment\": \"Thank you for your detailed and thoughtful review of our manuscript. In this response, we will provide further clarification and empirical evidence to address the concerns you raised. Since we have noticed that the weakness you mentioned has a corresponding relationship with the question, we will answer the question directly. Here is our detailed response:\\n**Q1: Could you elaborate on the choice of baseline models? How does EEGMamba\\u2019s performance compare with multi-task EEG models specifically designed for generalization across diverse tasks?** \\n**A1:** We choose EEGNet, AttnSleep, EEGConformer and HCANN as single-task classification models for comparison. BIOT serves as a representative model that is pre-trained and then fine-tuned on a single dataset. And as you suggested, we added a performance comparison with LaBraM, which is the same type model as BIOT. \\n| Classification Network | Multitask Model | Epilepsy | detection | Sleep stage | classification | Emotion | recognition | Motor | imagery |\\n|:------------------------:|:---------------:|:--------------------:|:-------------:|:-----------------------------:|:-------------:|:---------------------:|:-------------:|:---------------:|:-------------:|\\n| | | Siena | CHB-MIT | SleepEDF-20 | SHHS | DEAP | SEED | Shu | BCI-IV-2a |\\n| LaBraM | \\u00d7 | 0.9886\\u00b10.0043 | 0.9742\\u00b10.0099 | 0.7503\\u00b10.0388 | 0.7785\\u00b10.0243 | 0.5822\\u00b10.0321 | OOM | 0.5368\\u00b10.0312 | 0.2879\\u00b10.0160 |\\n| Single-task EEGMamba | \\u00d7 | 0.9897\\u00b10.0053 | 0.9817\\u00b10.0036 | 0.8387\\u00b10.0399 | 0.8441\\u00b10.0163 | 0.5985\\u00b10.0247 | 0.5779\\u00b10.0584 | 0.6169\\u00b10.0467 | 0.4596\\u00b10.0547 |\\n| EEGMamba | \\u221a | 0.9897\\u00b10.0038 | 0.9789\\u00b10.0132 | 0.8486\\u00b10.0276 | 0.8478\\u00b10.0177 | 0.5994\\u00b10.0134 | 0.5646\\u00b10.0366 | 0.6207\\u00b10.0505 | 0.4231\\u00b10.0522 |\", \"the_difference_between_eegmamba_and_these_models_is_that\": \"**1. End-to-End Training:** EEGMamba is trained end-to-end, allowing all components of the model to learn jointly, which makes EEGMamba highly efficient, as it avoids the need for multiple rounds of fine-tuning for each task. This is in contrast to models that train a shared backbone separately and then attach task-specific heads, which can be time-consuming and less efficient, especially when working with a large number of diverse tasks. \\n**2. Multi-Task Learning Capability:** EEGMamba is designed to handle multiple tasks and datasets simultaneously within a single training session. Unlike existing foundation models, which can only process one dataset at a time during fine-tuning, EEGMamba is capable of delivering results from multiple datasets simultaneously in a single session. \\nIn summary, EEGMamba offers a more integrated and sophisticated approach to multi-task learning for EEG applications. We hope this clarification highlights the unique advantages of EEGMamba and addresses your query effectively. \\n\\n**Q2: How does the ST-Adaptive module capture interactions between channels and temporal features, which are critical in EEG data? Have you considered modeling these dependencies more explicitly?** \\n**A2:** The ST-Adaptive module sequentially extracts the spatial features and temporal features of the EEG signals, without direct interactions. 
First, it uses a spatially adaptive convolution method to normalize EEG data to a fixed number of channels, enabling the model to capture channel interactions despite spatial diversity across datasets and tasks. Then the extracted channel features are fed into the dual-path structure of the tokenizer, which includes both small and wide kernel convolutions, allowing the model to capture local and global temporal features. The resulting tokens therefore naturally contain both channel and temporal features, and no additional spatio-temporal interaction module is required.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for your clarification.\", \"i_still_have_the_following_concerns\": \"1. As also noted by reviewer i9VP, more recent methods such as LaBraM [1] and the Brant series [2][3][4] seem to be more appropriate baselines for multitask comparison. While the authors propose a specific definition of multitask models, methods like [1][2][3][4] address multitask learning, which is more consistent with the standard multitask learning paradigm. Comparing single-task models on unrelated tasks (e.g., testing a sleep model on emotion classification) introduces fairness issues and undermines the evaluation. If the evaluation is focused on a specific task, such as sleep stage classification or emotion recognition, comparisons should involve state-of-the-art models specifically designed for that task to ensure a fair and meaningful assessment. Furthermore, some of the baselines used in the paper, such as EEGNet [5] and AttnSleep [6], published in 2018 and 2021, are outdated and may not adequately represent the current state of the field. \\n\\n2. While SEEG (or other physiological signals) and EEG differ in modalities, they both fall under the broader domain of neural signal analysis, particularly brain signal processing. There is no inherent reason why methods developed for SEEG cannot be adapted for EEG tasks and achieve good performance. Excluding these methods without experimental validation or empirical evidence diminishes the robustness of the argument.\\n\\n3. Regarding the \\\"quantitative evidence\\\" provided for computational efficiency, it is important to include concrete metrics such as training time and inference time, supported by experimental results. Generalized statements, without detailed evidence, are insufficient to substantiate claims of efficiency advantages.\\n\\nI will reconsider my assessment after reviewing the authors' further response.\\n\\n[1] Jiang, W. B., Zhao, L. M., & Lu, B. L. (2024). Large brain model for learning generic representations with tremendous EEG data in BCI. arXiv preprint arXiv:2405.18765.\\n\\n[2] Zhang, D., Yuan, Z., Yang, Y., Chen, J., Wang, J., & Li, Y. (2024). Brant: Foundation model for intracranial neural signal. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Yuan, Z., Zhang, D., Chen, J., Gu, G., & Yang, Y. (2024). Brant-2: Foundation Model for Brain Signals. arXiv preprint arXiv:2402.10251.\\n\\n[4] Zhang, D., Yuan, Z., Chen, J., Chen, K., & Yang, Y. (2024, August). Brant-X: A Unified Physiological Signal Alignment Framework. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4155-4166.\\n\\n[5] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., & Lance, B. J. (2018). EEGNet: a compact convolutional neural network for EEG-based brain\\u2013computer interfaces. Journal of neural engineering, 15(5), 056013.\\n\\n[6] Eldele, E., Chen, Z., Liu, C., Wu, M., Kwoh, C. K., Li, X., & Guan, C. (2021). An attention-based deep learning approach for sleep stage classification with single-channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 809-818.\"}",
"{\"title\": \"Reply to Q2\", \"comment\": \"**Q2. section 2.2 ST-Adaptive Module:**\\n**Q2.a: Scalability to New Datasets.** \\n**A2.a:** Our model can be extended to new datasets after training. We can describe the specific extension from two different situations as follows: \\n- When the newly added dataset has the same number of tasks, channels and classes as the previous dataset, we can directly use the task index of the original dataset. For example, the two datasets SleepEDF20 and SHHS used in the manuscript experiment can be replaced with each other, that is, if you want to apply the model trained on SleepEDF20 to SHHS, you only need to encode the same task and make a few epochs of fine-tuning. \\n- When the newly added dataset cannot meet the conditions in 1, we need to give it a new task number and pre-set its number of channels and classes, and then a few epochs of fine-tuning. This is broadly similar to what most current foundation models do, except that we need to pre-set the task number and the number of channels. \\n\\nWe use the Confused student EEG brainwave data [1] (hereinafter referred to as Confused EEG), which is a completely new task for EEGMamba. We applied the existing weights (i.e., the weights corresponding to Table 2-5 in the manuscript) to the Confused EEG, using 4 different random numbers to obtain a 7:3 training-test set ratio, similar to the approach in [3] (while [2] used five-fold cross-validation). We trained for 10 epochs each time and averaged the results. The test results are shown as follows: \\n| Classification Network | [2], 2019 | [3], 2023 | EEGMamba |\\n|:------------------------:|:---------:|:---------:|:--------:|\\n| Reported Accuracy | 0.7500 | 0.7670 | 0.7825 |\\n\\n[1] Wang H, Li Y, Hu X, et al. Using EEG to Improve Massive Open Online Courses Feedback Interaction[C]//AIED workshops. 2013. \\n[2] Wang H, Wu Z, Xing E P. Removing confounding factors associated weights in deep neural networks improves the prediction accuracy for healthcare applications[C]//Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing. NIH Public Access, 2019, 24: 54. \\n[3] Lim Z Y, Neo Y L. Confused vs Non-Confused Electroencephalography Signal Classification Using Deep Learning Algorithm[C]//2023 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS). IEEE, 2023: 195-200. \\nIn addition to this, we need to state that the starting point of our model design is an end-to-end multi-task classification model. \\n\\n**Q2.b: Tokenize Layer.** \\n**A2.b:** The choice of kernel sizes in the tokenize layer is mainly based on the methods of existing papers and our own experimental results. We refer to the MRCNN module in AttnSleep [1], which has two branches of different kernel sizes in order to enable the model to learn features at multiple scales simultaneously. The small kernel captures local, high-frequency details, while the wide kernel captures more global, low-frequency patterns. This dual-path approach is common in many EEG signal processing architectures [2][3]. \\nTherefore, we adopt the same structural design and selected the final convolution kernel size after a series of ablation studies. In these studies, we tested a variety of kernel sizes and found that the sizes now employed in the paper provide a good balance between capturing fine-grained and broader spatial features in EEG signals. \\n\\n[1] Eldele E, Chen Z, Liu C, et al. 
An attention-based deep learning approach for sleep stage classification with single-channel EEG[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2021, 29: 809-818. \\n[2] Zhu H, Zhou W, Fu C, et al. Masksleepnet: A cross-modality adaptation neural network for heterogeneous signals processing in sleep staging[J]. IEEE Journal of Biomedical and Health Informatics, 2023, 27(5): 2353-2364. \\n[3] Zhu H, Wang L, Shen N, et al. MS-HNN: Multi-scale hierarchical neural network with squeeze and excitation block for neonatal sleep staging using a single-channel EEG[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 2195-2204.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for the authors' response.\\n\\nHowever, I still have concerns regarding the experimental comparisons, and the authors' clarification does not fully address these concerns.\\n\\nAfter carefully reviewing all rebuttals, I have decided to maintain my original ratings.\"}",
"{\"comment\": \"Thank you for the clarification. Based on the additional information provided by the authors, I have decided to raise the score to 6. The paper introduces some novel and intriguing model design ideas, such as the use of task tokens to specify downstream tasks and the simultaneous learning with multiple datasets. However, the following concerns remain:\\n\\n1. Gap Between Offline Analysis and Real-Life Application\\nWhile the use of task tokens during offline analysis is conceptually straightforward\\u2014since the researcher knows which task (or dataset) is being evaluated\\u2014the applicability in real-life scenarios remains unclear. For instance, in a hypothetical situation involving simultaneous seizure and emotion monitoring alongside the control of a motor-imagery-based BCI (although admittedly unrealistic, this scenario reflects the diversity of the benchmarked datasets in this study), it is uncertain how task tokens would be assigned in practice. In real-world applications, where the timing and nature of a task are not explicitly known, task token assignment could pose a significant challenge. While it is theoretically possible to assign multiple task tokens simultaneously and generate corresponding outputs, the potential advantages of this approach over traditional methods have not been adequately demonstrated.\\n\\n2. Relevance of Long Sequence Modeling in EEG\\nThe utility of long sequence modeling in EEG analysis appears to be task-specific. For applications such as sleep stage classification and epilepsy detection, long sequence modeling is relevant. However, for other tasks, such as motor imagery, the proposed model may not necessarily outperform existing approaches, such as the EEG Conformer. The benefits of multi-task learning for such cases remain less evident.\\n\\n3. Lack of Model Interpretability\\nThe paper does not sufficiently address the interpretability of the proposed model. It would be highly valuable to analyze what has been learned across datasets in the context of multi-task learning and how this differs from traditional single-dataset approaches. Such an analysis could provide deeper insights into the generalization capabilities of the model and reveal the specific types of EEG features it captures. These insights could significantly enhance the understanding of the model's contributions to EEG analysis.\"}",
"{\"title\": \"Responses and thanks\", \"comment\": \"We would like to sincerely thank the reviewer uVYA for insightful comments and for raising important points regarding the applicability of our proposed model. We appreciate the time and effort taken to evaluate our work, and we have carefully considered the feedback provided:\\n1. We agree that while task tokens are straightforward in offline analysis, real-world application is more complex. Assigning task tokens in scenarios where task timing is not explicit could indeed be challenging. We sincerely thank the reviewers for providing such insights to help us further improve the manuscript. In future work, we plan to explore dynamic assignment of task tokens during real-time inference and investigate strategies for handling such complexities in practical settings, especially in multi-task environments. \\n2. We acknowledge that long sequence modeling is more relevant for certain tasks (e.g., sleep stage classification, epilepsy detection). For tasks like motor imagery, the benefits of long sequence modeling are less clear. Moving forward, we will conduct additional experiments comparing our approach with models like EEG Conformer for different EEG tasks to better understand where our method provides clear advantages.\\n3. We appreciate the reviewer\\u2019s concern about model interpretability. To address this, we plan to include feature visualizations in future revisions to provide deeper insights into the learned representations and how multi-task learning influences generalization across different datasets. This will help improve the interpretability of the model and its application in real-world settings. \\n\\nOnce again, we would like to thank the reviewer for their thoughtful and constructive comments. We believe that incorporating these suggestions will significantly strengthen the paper and provide a clearer understanding of the potential and limitations of our approach.\"}",
"{\"summary\": \"The paper \\\"EEGMAMBA: Bidirectional State Space Model with Mixture of Experts for EEG Multi-Task Classification\\\" presents EEGMamba, a multi-task learning framework for EEG classification tasks, addressing challenges related to signal length and channel variability. EEGMamba integrates a Spatio-Temporal-Adaptive (ST-Adaptive) module, bidirectional Mamba blocks, and a task-aware Mixture of Experts (MoE) to enhance adaptability and task-specific processing across diverse EEG datasets. The ST-Adaptive module standardizes data of various lengths and channel numbers, while the bidirectional Mamba captures temporal dependencies in EEG sequences, and the task-aware MoE module selects experts based on task, enhancing classification accuracy and generalization. Tested on eight datasets spanning four task types (seizure detection, emotion recognition, sleep stage classification, and motor imagery), EEGMamba achieves state-of-the-art results, demonstrating superior efficiency, memory usage, and accuracy across tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"EEGMamba achieves superior memory efficiency and faster inference speed than traditional transformer-based models, especially on longer EEG sequences, thus proving its practical value.\\n\\nThe proposed model demonstrates an approach to handling EEG data of varying lengths and channels, incorporating a class token for temporal adaptability and a task-aware MoE to distinguish task-specific features.\", \"weaknesses\": \"1. Limited Contribution: The author claims that EEGmamba is the first universal EEG classification network to effectively implement multi-task learning for EEG applications. However, several established methods are available that can be applied to multi-task EEG learning, as cited in [1][2][3].\\n\\n2. Inappropriate Comparisons: The choice of baselines for comparison lacks relevance. For instance, using AttnSleep [3] as a baseline for seizure detection/emotion recognition is incongruous, as it is specifically designed for sleep staging. Additionally, the author does not include multi-task learning methods as baselines, instead comparing against smaller models tailored to individual tasks. For fair assessment, each specific task should be compared to the most relevant model designed for that purpose. To evaluate cross-task capabilities, comparisons should involve multi-task learning methods like those in [1][2][3][5][6].\\n\\n3. Efficiency Evaluation: The author should provide quantitative evidence demonstrating the proposed method's efficiency advantages over previous approaches.\\n\\n\\n\\n[1] Jiang, W., Zhao, L., & Lu, B. L. Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI. In The Twelfth International Conference on Learning Representations.\\n\\n[2] Chen, Y., Ren, K., Song, K., Wang, Y., Wang, Y., Li, D., & Qiu, L. (2024). EEGFormer: Towards transferable and interpretable large-scale EEG foundation model. arXiv preprint arXiv:2401.10278.\\n\\n[3] Jiang, W. B., Wang, Y., Lu, B. L., & Li, D. (2024). NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals. arXiv preprint arXiv:2409.00101.\\n\\n[4] Eldele, E., Chen, Z., Liu, C., Wu, M., Kwoh, C. K., Li, X., & Guan, C. (2021). An attention-based deep learning approach for sleep stage classification with single-channel EEG. 
IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 809-818.\\n\\n[5] Zhang, D., Yuan, Z., Yang, Y., Chen, J., Wang, J., & Li, Y. (2024). Brant: Foundation model for intracranial neural signal. Advances in Neural Information Processing Systems, 36.\\n\\n[6] Wang, C., Subramaniam, V., Yaari, A. U., Kreiman, G., Katz, B., Cases, I., & Barbu, A. (2023). BrainBERT: Self-supervised representation learning for intracranial recordings. arXiv preprint arXiv:2302.14367.\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
13G5KXm98a | Voronoi Tessellation-based Confidence Decision Boundary Visualization to Enhance Understanding of Active Learning | [
"Jie Chen",
"Siteng Ma",
"Dairui Liu",
"Brian Mac Namee",
"Ruihai Dong"
] | The current visualizations used in active learning fail to capture the cumulative effect of the model in the active learning process, making it difficult for researchers to effectively observe and analyze the practical performance of different query strategies.
To address this issue, we introduce the confidence decision boundary visualization, which is generated through Voronoi tessellation and evaluated using ridge confidence. This allows better understanding of selection strategies used in active learning. This approach enhances the information content in boundary regions where data distribution is sparse. Based on the confidence decision boundary, we created a series of visualizations to evaluate active learning query strategies. These visualizations capture nuanced variations regarding how different selection strategies perform sampling, the characteristics of points selected by various methods, and the impact of newly sampled points on the model. This enables a much deeper understanding of the underlying mechanisms of existing query strategies. | [
"Decision Boundary",
"Visualization",
"Active Learning"
] | Reject | https://openreview.net/pdf?id=13G5KXm98a | https://openreview.net/forum?id=13G5KXm98a | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"v9ZCXEhhXm",
"uMCyjtnV4H",
"gOzIq3KpYd",
"ebdDSIPrWi",
"cmC9Pa6ywc",
"ZJL2aryy9F",
"Xm6KA2ihlP",
"WnXJzV8dLQ",
"QAjce4HSRy",
"PaqMv5e8Cs",
"MxgZ4yawIR",
"MWmHKSZrnK",
"M8bwBARCEb",
"IBF67ZiEK1",
"Fuo49swR3Y",
"EEYpErFtQ2",
"DVyFnUFOYV",
"1dgWsvkGYk"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730564020976,
1732597797543,
1732593248585,
1730569442696,
1732596366570,
1732595923700,
1732614309199,
1732594908644,
1730572109413,
1732964067032,
1732858169372,
1734676159974,
1732594409955,
1730554820580,
1737524178180,
1732597333905,
1732955705209,
1732708650657
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_8SHQ"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_Vpbn"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_rwsQ"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_kBv3"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_kBv3"
],
[
"ICLR.cc/2025/Conference/Submission12297/Area_Chair_Wqf6"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_rwsQ"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12297/Reviewer_Vpbn"
],
[
"ICLR.cc/2025/Conference/Submission12297/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces the confidence decision boundary visualization method, which provides an interpretable comparison of various active learning strategies, yielding valuable insights. The experimental design and discussion are thorough.\\nI\\u2019m glad to see that some work at the ICLR conference focuses on data visualization. The insights provided by data visualization can better help users understand some of the underlying aspects behind the models. Additionally, visualization on the existing dataset is able to provide better insights and differentiation capabilities.\\n\\nOne aspect I am particularly concerned about is the scalability of this visualization. Currently, the data volume is relatively small, but with a large number of categories, how can the scalability of this visualization be ensured? I would suggest that the authors provide more discussion and experimental results on this point. When the dataset is large, the entire Voronoi becomes complex, with very small cells and densely packed ridges that obscure each other, making it visually unfriendly. I doubt whether users other than authors can derive similar insights from such dense and non-interactive visualizations. The authors could provide quantitative and qualitative user surveys to demonstrate the effectiveness and usability of their method. The authors could consider further optimizations, such as clustering before segmentation or refining the segmentation rules, rather than just using Voronoi tessellation.\\nThe visualization method seems to lack interactivity. Providing an interactive tool would enhance the appeal of this work. Additionally, the authors do not explicitly mention which dimensionality reduction method was used. Different dimensionality reduction techniques may affect subsequent Voronoi and decision boundary construction. The authors should provide explanations and evaluations of this aspect.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I\\u2019m glad to see that some work at the ICLR conference focuses on data visualization. The insights provided by data visualization can better help users understand some of the underlying aspects behind the models. Additionally, visualization on the existing dataset is able to provide better insights and differentiation capabilities.\", \"weaknesses\": \"One aspect I am particularly concerned about is the scalability of this visualization. Currently, the data volume is relatively small, but with a large number of categories, how can the scalability of this visualization be ensured? I would suggest that the authors provide more discussion and experimental results on this point.\", \"questions\": \"One aspect I am particularly concerned about is the scalability of this visualization. Currently, the data volume is relatively small, but with a large number of categories, how can the scalability of this visualization be ensured? I would suggest that the authors provide more discussion and experimental results on this point. When the dataset is large, the entire Voronoi becomes complex, with very small cells and densely packed ridges that obscure each other, making it visually unfriendly. I doubt whether users other than authors can derive similar insights from such dense and non-interactive visualizations. The authors could provide quantitative and qualitative user surveys to demonstrate the effectiveness and usability of their method. 
The authors could consider further optimizations, such as clustering before segmentation or refining the segmentation rules, rather than just using Voronoi tessellation.\\nThe visualization method seems to lack interactivity. Providing an interactive tool would enhance the appeal of this work. Additionally, the authors do not explicitly mention which dimensionality reduction method was used. Different dimensionality reduction techniques may affect subsequent Voronoi and decision boundary construction. The authors should provide explanations and evaluations of this aspect.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"### **Q3: What specific contributions does it provide in terms of understanding boundary effectiveness that traditional methods do not?**\\n\\nThere are very few visualization methods specifically designed for active learning (AL). Most existing approaches provide only basic visualizations accompanying the proposal of certain query strategies, such as visualizing the distribution of queried samples for comparison or explanation purposes [1, 2, 3]. Alternatively, some visualizations depict the impact of training samples on the decision boundary of simple models to demonstrate the sampling process and particular scenarios [4].\\n\\nSimilarly, there are also not many decision boundary visualization methods with general applicability. For example, in active learning, existing methods often provide only the changes in the decision boundary over time but fail to jointly reflect the relationships between the training data and the decision boundary, as well as the connections between the decision boundary and the sampling data selected by the model. In contrast, our method can visualize these relationships, enabling a deeper investigation into active learning query strategies.\\n\\nMoreover, while dimensionality reduction for visualization inevitably introduces spatial distortion, our proposed confidence decision boundary captures the relationship between the predicted ridges (which collectively form the decision boundary) in the 2D visualization and their corresponding classified points in the high-dimensional space. Unlike traditional approaches that treat the decision boundary uniformly, our method incorporates ridge confidence to highlight the varying levels of complexity and uncertainty in different regions of the boundary.\\n\\n#### References\\n\\n[1] Agarwal, Sharat, et al. \\\"Contextual diversity for active learning.\\\" *Computer Vision\\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\\u201328, 2020, Proceedings, Part XVI 16.* Springer International Publishing, 2020. \\n[2] Pinsler, Robert, et al. \\\"Bayesian batch active learning as sparse subset approximation.\\\" *Advances in neural information processing systems 32* (2019). \\n[3] Liu, Zhuoming, et al. \\\"Influence selection for active learning.\\\" *Proceedings of the IEEE/CVF international conference on computer vision.* 2021. \\n[4] Tharwat, Alaa, and Wolfram Schenck. \\\"A survey on active learning: State-of-the-art, practical challenges and research directions.\\\" *Mathematics* 11.4 (2023): 820.\\n\\n\\n### **Q4: Omission of relevant literature**\\n\\nWe appreciate you pointing out the omission of relevant literature. While the referenced paper does use Voronoi, it does not apply it to AL, which is why we initially did not consider it. \\n\\nIn the revision, we will include this reference and discuss its connection to our approach.\"}",
"{\"comment\": \"We sincerely appreciate the reviewer **kBv3** for the comment.\\n\\n### **Q1: Lack of related work and over length of section 3**\\n\\nThere are very few visualization methods specifically targeting **active learning (AL)**, and most are limited to basic visualizations accompanying the proposal of query strategies. Generalizable decision boundary visualization methods are also scarce.\\n\\nDue to the limited page length required by ICLR and based on the format of similar ICLR papers in previous years (for instance, Jenssen, Robert. \\\"MAP IT to Visualize Representations.\\\" *The Twelfth International Conference on Learning Representations*), we integrated the related work section into the introduction section.\\n\\nRegarding the issue you mentioned about Section 3 being too long, we will restructure the paper in the revised version, splitting the original Subsections 3.1 and 3.2 into two independent sections.\\n \\n \\n### **Q2: Lack of baselines for visualization methods**\\n\\nQuantitative comparison is indeed a very good suggestion. However, the focus of this paper is to explore the sampling behaviors and results of different query strategies, leading to conclusions that cannot be derived through other visualization methods or direct comparisons. Different visualization methods emphasize different aspects. In the introduction section, we discussed various existing visualization methods and their applicability to active learning.\\n\\n\\n### **Q3: Construction of the 2D feature map**\\n\\nIn lines 208-232 of the paper, we describe the features used to generate the Voronoi diagram. In fact, we have two ways of constructing a 2D feature map:\\n\\n1. **Dynamically Updated Features**: These features are updated during active learning (e.g., Figure 2). They are extracted by the trained model at each round on the pool data and test data, with the dimensionality-reduced results serving as Voronoi points for constructing our visualization.\\n\\n2. **Fixed Features for Posterior Analysis**: These features are used for posterior analysis (e.g., Figures 5, 6, and 7). Because we need to observe and compare the sampling behaviors and results of different query strategies, we focus on analyzing the confidence decision boundary, sampling points (newly queried points), and the impact of new samples on model training (error detection) using a fixed feature map. This is necessary due to the differences introduced by the features extracted by models from different rounds and further variations caused by different dimensionality reduction methods.\\n\\nThus, the second method involves training a **visualization model** (described in Section 3.1.3) using all the pool data to help generate fixed features, which are then used for the feature map in every round. The reason for using the visualization model is explained in lines 211-215 of the paper.\"}",
"{\"summary\": \"The proposed work deals with the challenging issue of getting insights into sampling strategies of active learning. Addressing this issue, the authors developed a novel visualization approach: The feature space of the samples is projected to a 2D plane and segmented into Voronoi cells. Ridges of the Voronoi cells are used to construct the decision boundary. Additionally, the confidence of this decision boundary is assessed by leveraging predicted probabilities of samples for each ridge. Thus, the confidence of the decision boundary varies locally. The authors apply the confidence decision boundary to visualize the learning and sampling behavior of eight active learning strategies on MNIST and CIFAR-10. Using various other visualizations, they qualitatively depict the characteristics of the sampling strategies and conclude that uncertainty caused by insufficient training samples may be effectively tackled by sampling. In contrast, uncertainty in noisy regions may be hard to tackle.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper sheds light on the crucial topic of getting insights into machine learning algorithms and provides visual results that highlight apparent differences between sampling strategies.\", \"The chosen approach of this paper is generalizable to various machine learning models performing prediction tasks.\", \"The authors provide extensive visualizations for each sampling strategy and compare their characteristics with the help of side-by-side visualizations.\", \"The paper examines a high number of sampling strategies.\"], \"weaknesses\": [\"The paper contains errors in writing. For example, the sentence \\\"Since points on either side of the predicted ridges belong to different predicted classes, the confidence of the predictions vary, different sections of the decision boundary carry varying degrees of informative value due to differences in prediction confidence.\\\" in lines 110-113 should be corrected.\", \"Visualizations lack proper color encoding. For example, the decision boundary, a continuous value, is represented with a qualitative color scale (see Figure 5).\", \"It remains unclear which snapshots of the feature space were taken for visual elaboration. For example, it is not described why the third round of training was used to inspect the decision boundary in Figure 5. Thus, the authors' conclusions are only based on single visual findings, making them hard to retrace.\", \"A clear methodology for selecting snapshots for inspection would improve the clarity and usefulness of the authors' approach. Furthermore, the evaluation relies on observations of a single training run - a more extensive evaluation with a higher number of training runs per sampling strategy would be a good starting point.\", \"Some visualizations lack proper annotations, making them hard to understand. Figure 4 especially needs annotations on what the different colors encode. Similarly, which training rounds do we see in Figure 2?\"], \"questions\": [\"It is unclear why one can assess the confidence of the predicted ridges with the representative Voronoi center points $\\\\mathbf{p_i}$ and $\\\\mathbf{p_j}$ (equation 2). Although each ridge can be represented by its representative point $\\\\mathbf{p}$, the predicted probabilities close to the decision boundary may differ strongly from their center. 
Could you please clarify why using the point $\\\\mathbf{p}$ gives an accurate prediction of the confidence?\", \"How did you decide on which snapshots to visualize?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"### **Q3: Effect of different dimensionality reduction methods on Voronoi and decision boundary**\\n\\nWe believe the point you raised is highly valuable, especially in the context of low-dimensional visualizations. Different dimensionality reduction methods result in varying degrees of information loss, which can cause spatial distortion in our visualization results and, consequently, affect the decision boundary constructed in the reduced dimensions. However, this spatial distortion is an inherent and objective issue, and its impact is limited according to the experimental results. Our visualization method does not rely on any specific dimensionality reduction technique, and exploring the effect caused by different dimensionality reduction methods across various datasets is not the core focus of this work. However, in the revision, we will include an analysis of the effect caused by the dimensionality reduction technique currently employed (t-SNE) in our method.\", \"we_used_two_different_methods_for_evaluation\": \"1. **Correlation Test** [1, 2]:\\n This method evaluates the Pearson correlation between the pairwise similarity of the features extracted by the model and the pairwise distance matrix calculated from the dimensionality-reduced data. The results are shown in the table below:\\n\\n | |Pearson Correlation|\\u2191 | EntropySampling (Avg) | RandomSampling (Avg) | Visualization Model |\\n |--------------------------|-----------------------|-----------------------|---------------------|\\n | **MNIST** | 0.6307 | 0.6630 | 0.5951 |\\n | **CIFAR-10** | 0.5049 | 0.4615 | 0.5166 |\\n\\n Since sampling occurs over multiple rounds, we calculated the average values. The absolute value of Pearson Correlation closer to 1 indicates a higher correlation, while values closer to 0 indicate no correlation. The results indicate that the impact of spatial distortion caused by dimensionality reduction is relatively small.\\n\\n2. **Local Structure Preservation** [3]:\\n We trained and tested an SVM on the same training data as in each round of the AL process, but used the corresponding dimensionality-reduced 2D features for visualization instead of the original data, and compared its accuracy with that of the model in the AL process trained on the original data. Additionally, we evaluated the results of a 1-NN classifier on the 2D test set. The results are as follows:\\n#### - Accuracy:\\n | Round | AL RandomSampling (CIFAR-10) | SVM | 1-NN |\\n |-------|-------------------------------|---------|---------|\\n | 0 | 0.3399 | 0.4186 | 0.6467 |\\n | 1 | 0.5191 | 0.8096 | 0.8817 |\\n | 2 | 0.6722 | 0.8931 | 0.8768 |\\n | 3 | 0.7401 | 0.9080 | 0.8897 |\\n | 4 | 0.7797 | 0.9269 | 0.9044 |\\n | 5 | 0.8167 | 0.9246 | 0.8985 |\\n | 6 | 0.8235 | 0.9288 | 0.9182 |\\n | 7 | 0.8434 | 0.9320 | 0.9259 |\\n | 8 | 0.8682 | 0.9414 | 0.9253 |\\n | 9 | 0.8814 | 0.9411 | 0.9283 |\\n | 10 | 0.8893 | 0.9380 | 0.9388 |\\n | 11 | 0.8895 | 0.9499 | 0.9379 |\\n\\n It can be observed that both SVM and 1-NN achieve better performance compared to the AL process, indicating that the dimensionality reduction method effectively preserves the local structural characteristics of the original data. More experimental results will be included in the appendix.\\n\\n#### References\\n\\n[1] Smyth, Barry, Mark Mullins, and Elizabeth McKenna. \\\"Picture Perfect: Visualisation Techniques for Case-based Reasoning.\\\" *ECAI*. 2000. \\n[2] Namee, Brian Mac, and Sarah Jane Delany. 
\\\"CBTV: Visualising case bases for similarity measure design and selection.\\\" *International conference on case-based reasoning*. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. \\n[3] Huang, Haiyang, et al. \\\"Towards a comprehensive evaluation of dimension reduction methods for transcriptomic data visualization.\\\" *Communications Biology* 5.1 (2022): 719.\"}",
"{\"comment\": \"We sincerely appreciate the reviewer **8SHQ** for the comment.\\n\\n### **Q1: Scalability of the visualization method**\\n\\nWe believe that bigger dataset sizes won\\u2019t cause a significant problem for our approach. If the dataset is exceedingly large, a subset of it can be used for visualization. On a smaller dataset that is independently and identically distributed (i.i.d.) with the larger dataset, the decision boundary of the trained model shows relatively small differences compared to the one trained on the larger dataset. We computed the prediction differences on the test set and the overlap of decision boundaries (formed by predicted ridges) on the test set between models trained on smaller datasets of varying sizes and the **Visualization Model (VM)** trained on the whole pool dataset. The experimental results are as follows:\\n\\n#### - Prediction Differences:\\n\\n| Model trained on | KL Divergence\\u2193 | JS Divergence\\u2193 | Wasserstein Distance\\u2193 |\\n|--------------------------|----------------|----------------|------------------------|\\n| **MNIST (1000 samples)** | 0.1890 | 0.0324 | 0.0067 |\\n| **MNIST (5000 samples)** | 0.0515 | 0.0101 | 0.0028 |\\n| **MNIST (10000 samples)**| 0.0362 | 0.0067 | 0.0019 |\\n| **CIFAR-10 (1000 samples)** | 0.1255 | 0.0318 | 0.0121 |\\n| **CIFAR-10 (5000 samples)** | 0.0371 | 0.0094 | 0.0045 |\\n| **CIFAR-10 (10000 samples)**| 0.0177 | 0.0043 | 0.0025 |\\n\\nThe smaller the values of KL Divergence, JS Divergence, and Wasserstein Distance, the more similar the two predicted probability distributions are.\\n\\n#### - Overlapping Predicted Ridges:\\n\\n\\n| Model trained on | Total Predicted Ridges\\u2193 | Overlapping Predicted Ridges (compared to VM)\\u2191 |\\n|--------------------------|----------------|--------------------------------------|\\n| **CIFAR-10 (1000 samples)** | 3728 | 2587 |\\n| **CIFAR-10 (5000 samples)** | 3479 | 3072 |\\n| **CIFAR-10 (10000 samples)**| 3520 | 3233 |\\n| **CIFAR-10 (40000 samples) VM** | 3426 | 3426 |\\n\\n\\n| Model trained on | Total Predicted Ridges\\u2193 | Overlapping Predicted Ridges (compared to VM)\\u2191 |\\n|--------------------------|----------------|--------------------------------------|\\n| **MNIST (1000 samples)** | 2919 | 1224 |\\n| **MNIST (5000 samples)** | 1936 | 1435 |\\n| **MNIST (10000 samples)**| 1880 | 1518 |\\n| **MNIST (50000 samples) VM** | 1799 | 1799 |\\n\\nAdditionally, we will fill different cells with colors to make it easier to observe the changes in regions. In fact, in our paper, we constructed 50,000 Voronoi cells and 40,000 Voronoi cells, respectively.\\n\\nThe real issue is likely to arise when there are many classes. Anything beyond approximately 20 classes becomes very challenging to visualize. To address this, we recommend splitting the task into multiple sub-classification tasks and performing visualization on these smaller subsets. Our approach mainly focuses on exploring the sampling behaviors of classical query strategies and analyzing the impact of newly sampled points on the model\\u2019s training in the subsequent round.\\n\\n### **Q2: Add quantitative and qualitative user surveys**\\n\\nConducting qualitative or quantitative user surveys based on our current visualization exploration of different query strategies in active learning is indeed not feasible for us at this stage. In this paper, we primarily performed qualitative analysis of the visualization results we obtained. 
\\n\\nHowever, we greatly appreciate your suggestion regarding interactivity\\u2014designing an interactive visualization tool for active learning is indeed part of our future work. When implementing the interactive tool in the next step, we will consider conducting relevant user studies.\"}",
"{\"comment\": \"I appreciate the authors' feedback and the work they've put into addressing my concerns. Although I\\u2019ve increased my grade from 3 to 5, my concerns about using 2D projections to represent high-dimensional decision boundaries still persist. I would be grateful if the authors could further discuss this aspect.\"}",
"{\"comment\": \"We sincerely appreciate the reviewer **Vpbn** for the comment.\\n\\n### **Q1: Writing errors**\\n\\nThank you for pointing out the issues. We will rewrite these relatively complex sentences to achieve better clarity and expression.\\n\\n\\n### **Q2: Visualizations lack proper color encoding**\\n\\nIt is true that the confidence of the decision boundary (represented by all predicted ridges in our visualization results) is a continuous value. However, because there are too many predicted ridges, using continuous values for color mapping would result in similar colors, making it difficult to distinguish high-confidence and low-confidence regions. \\n\\nUsing intervals to assign colors creates a stronger contrast effect, which enhances observation clarity.\\n\\n\\n### **Q3: Snapshots selection and the need for a clear methodology for selecting snapshots for inspection**\\n\\nWe did not visualize snapshots of specific rounds but instead visualized the entire active learning process. The examples shown in the main text are only a part of the results. We will organize these visualizations and include them in the appendix of the revised version.\\n\\nFigure 5 selects the third round because the second row of Figure 5 shows the cumulative sampling points from rounds 1 to 5. Therefore, we chose the decision boundary of the middle round (round 3) for display. As a result, the third row shows the error detection of round 4 to observe the impact of the points selected by the model trained in round 3 on the training of the model in the next round.\\n\\n\\n### **Q4: Why one can assess the confidence of the predicted ridges with the representative Voronoi center points Pi and Pj (equation 2). Although each ridge can be represented by its representative point P, the predicted probabilities close to the decision boundary may differ strongly from their center.**\\n\\nYou may have misunderstood the Voronoi point \\\\( P \\\\), the Voronoi cell, and the ridge. According to the properties of Voronoi tessellation, the Voronoi point \\\\( P \\\\) can represent all points that fall within its associated Voronoi cell (see lines 119-120 of the paper). Moreover, \\\\( P_i \\\\) and \\\\( P_j \\\\) are the two closest points from our existing point set to the ridge.\\n\\nThe situation you mentioned, where the point closest to the predicted ridge may be very different from the representative point of its Voronoi cell, can indeed occur. However, the Voronoi cell can be regarded as a collection of similar points, and the representative Voronoi point can represent all the points falling within its associated cell based on the nearest neighbor relationship (for details, see Chapter 7 of *Computational Geometry: Algorithms and Applications*) [1].\\n\\nIn our experiments, the Voronoi points correspond to the points already learned by the AL model, while the points within the cells are those from the pool dataset that the model has not yet learned. Therefore, this situation does not affect our evaluation of the confidence of the ridge dividing two different-class cells.\\n\\n#### Reference\\n[1] De Berg, Mark. *Computational geometry: algorithms and applications*. Springer Science & Business Media, 2000.\"}",
"{\"summary\": \"This paper introduces a Voronoi tessellation-based visualization method for active learning (AL) to address the limitations of traditional visualizations, which often lack detail in uncertain or sparse boundary areas. By using a ridge confidence metric with Voronoi cells, the approach offers a clearer, more informative view of decision boundaries, aiding in analyzing different AL sampling strategies. Tested on MNIST and CIFAR-10 datasets, the method reveals unique sampling behaviors across entropy, margin, and diversity-based strategies, helping clarify how each handles uncertainty and impacts model learning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. By using Voronoi tessellation and introducing a ridge confidence metric, the paper provides a more detailed and interpretable visualization of decision boundaries, offering insights into the complexity and uncertainty of boundary regions.\\n1. The authors compare multiple AL sampling methods (e.g., entropy, margin, BALD dropout, KMeans) on distinct datasets, providing valuable insights into the behaviors and trade-offs of each approach in different scenarios.\\n1. The visualization effectively demonstrates how models handle uncertainty due to limited training samples and noisy regions, which is beneficial for identifying optimal query strategies for different types of uncertainties.\\n1. While focused on active learning, the proposed visualization technique has potential utility in other areas of machine learning that require understanding complex decision boundaries in high-dimensional spaces.\", \"weaknesses\": \"1. This paper lacks a review of related work. It's important to examine the literature in the related fields, especially other visualization methods developed for active learning.\\n1. While the work in the paper is valuable, it also lacks of the baseline methods. The paper demonstrates the effectiveness of Voronoi tessellation and ridge confidence in active learning throught several case studies, but it does not prove how much better it is compared with other visualization methods. I suggest a quantitative analysis with different visualization methods, like a user study.\\n1. Section 3 is over lengthy. The authors should consider breaking it down. Subsection 3.1 and Subsection 3.2 should be independent sections.\\n1. The discussion of the 2D feature map is missing. How much does space distortion affect the Voronoi tesselletion construction? The neighbors in the original space are not always the same in the feature space. How reliable is the decision boundary in this case? Since everything is visualized in the feature map, it's essential to give a comprehensive discussion on the feature map itself.\\n1. The visualization strategies are informative, but the implementation in the plots are really hard to follow. I have several suggestions for improvement:\\n - Figure 2, 3, 5, 6, 7, and 8 are visualization results, but they are too small and dense to read, which is fatal for a visualization paper. At least the authors should put high resolution images in the appendix.\\n - In figure 5, the colormap of confidence interval should be different from that of the classes. I suggest different levels of gray to show the confidence interval\\n - In the visualization plots, the scatter points, Voronoi ridges, and cells are crowded and overlapping, causing severe visual clutter. Actually, it's not necessary to show all the information in a single plot at once. 
It only makes the plot massive. One way to display a large quantity of information is to enable user interactions. For example, enable users to choose classes they are interested in, add a slider to show different levels of confidence, brush to zoom in on a decision boundary. \\n - The Error Detection visualization results are interesting. It gives more information on how the active learning model behaves. I suggest putting multiple training iterations of Error Detection plots in the appendix to visualize its evolution over time. \\n\\nWhile I believe this work makes good contribution, I'm afraid the authors cannot resolve my concerns in a reasonable time. Therefore, I recommend a weak reject.\", \"questions\": \"I don't quite understand how the 2D feature map is constructed. It is said that it directly comes from the neural network model, but how? Could the authors provide more details?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We appreciate the reviewer **Vpbn's** feedback and understand the concerns regarding the potential arbitrariness of interval boundaries. However, we chose to use interval-based color encoding after carefully considering and testing different approaches, including a continuous color scale. Our findings are as follows:\\n\\n1. **Challenges with Continuous Color Encoding:** \\n While continuous color encoding can theoretically reflect the similarity between values, in our specific context, it presented challenges. Across multiple rounds, the confidence levels of ridges showed only marginal overall improvements, resulting in subtle variations that were difficult to distinguish when using a continuous color scale. This approach made it harder to observe the between-rounds changes in decision boundary confidence clearly, as the lack of contrast reduced the visualization's ability to highlight key differences between rounds.\\n\\n2. **Advantages of Interval-Based Encoding:** \\n To address the above challenge, we introduced interval-based encoding with a 0.1 step size, enhancing the visualization's clarity and making it easier to compare decision boundary changes between rounds. Through extensive testing, we found that finer intervals (e.g., 0.05) introduced excessive granularity, complicating the analysis. Conversely, coarser intervals (e.g., 0.2) oversimplified the data, reducing the effectiveness of the visualization. A 0.1 interval struck the optimal balance.\\n\\nWhile any interval-based approach may introduce some degree of boundary sensitivity, our choice of 0.1 intervals was not arbitrary. It was informed by repeated experimentation and evaluation to ensure it supported effective visualization and interpretation of the decision boundary's confidence levels.\"}",
"{\"comment\": \"I appreciate the authors\\u2019 response and their thorough analysis of spatial distortion, which has effectively addressed my concerns. However, as a visualization paper, it is crucial to ensure that the visualization scheme is efficient. I strongly recommend that the authors revisit their design goals and refine the visualization tools to enhance their usability and clarity. Based on this, I would raise the grade from 5 to 6.\"}",
"{\"metareview\": \"**(a) Summary**\\n\\nThis paper presents a novel visualization approach for active learning using Voronoi tessellation and a ridge confidence metric. The method projects high-dimensional feature spaces onto 2D planes and constructs decision boundaries using Voronoi cell ridges. The authors apply this visualization technique to analyze eight different active learning sampling strategies on MNIST and CIFAR-10 datasets, revealing distinct sampling behaviors and their effectiveness in handling different types of uncertainties.\\n\\n**(b) Strengths**\\n- Provides detailed and interpretable visualization of decision boundaries through Voronoi tessellation\\nEffectively demonstrates model behavior in handling different types of uncertainties\\n- Successfully provides insights into ML models through comprehensive comparison of sampling strategies\\n\\n**(c) Weaknesses**\\n- Limited literature review and baseline comparisons\\n- Lack of user studies to validate visualization effectiveness\\n- Scalability challenges with larger datasets\\n- Suboptimal visualization design choices, particularly in color encoding\\n\\n**(d) Decision factors** \\n\\nWhile the paper presents an interesting visualization approach for active learning, several key issues remain unresolved. The visualization design choices need refinement, particularly in color encoding and scalability. The lack of user studies and baseline comparisons makes it difficult to validate the method's practical utility. While we acknowledge the potential value of this approach, these limitations lead to a recommendation for rejection.\", \"additional_comments_on_reviewer_discussion\": \"Although the authors provided detailed responses regarding spatial distortion analysis, two reviewers (kBv3 and Vpbn) remained unconvinced about the fundamental visualization design choices, particularly the color encoding schemes and representation of continuous values. While two reviewers (kBv3 and rwsQ) acknowledged the improvements and raised their score, the consensus was that the visualization design and experimental evaluation require significant refinement, especially in terms of usability and clarity, before it can effectively serve as a visualization tool.\"}",
"{\"comment\": \"### **Q4: Effects of spatial distortion on Voronoi tessellation construction**\\n\\nWe believe the point you raised is highly valuable, and a comprehensive discussion about the feature map itself is indeed constructive. The impact of spatial distortion is an inherent and objective issue, though its effects are limited according to the experimental results. Our visualization method does not rely on any specific dimensionality reduction technique, and exploring the spatial distortion introduced by different dimensionality reduction methods across various datasets is not the core focus of this work. However, in the revision, we will include an analysis of the spatial distortion caused by the dimensionality reduction technique currently employed (t-SNE) in our method.\", \"we_used_two_different_methods_for_evaluation\": \"1. **Correlation Test** [1, 2]:\\n This method evaluates the Pearson correlation between the pairwise similarity of the features extracted by the model and the pairwise distance matrix calculated from the dimensionality-reduced data. The results are shown in the table below:\\n\\n | |Pearson Correlation|\\u2191 | EntropySampling (Avg) | RandomSampling (Avg) | Visualization Model |\\n |--------------------------|-----------------------|-----------------------|---------------------|\\n | **MNIST** | 0.6307 | 0.6630 | 0.5951 |\\n | **CIFAR-10** | 0.5049 | 0.4615 | 0.5166 |\\n\\n Since sampling occurs over multiple rounds, we calculated the average values. The absolute value of Pearson Correlation closer to 1 indicates a higher correlation, while values closer to 0 indicate no correlation. The results indicate that the impact of spatial distortion caused by dimensionality reduction is relatively small.\\n\\n2. **Local Structure Preservation** [3]:\\n We trained and tested an SVM on the same training data as in each round of the AL process, but used the corresponding dimensionality-reduced 2D features for visualization instead of the original data, and compared its accuracy with that of the model in the AL process trained on the original data. Additionally, we evaluated the results of a 1-NN classifier on the 2D test set. The results are as follows:\\n#### - Accuracy:\\n | Round | AL RandomSampling (CIFAR-10) | SVM | 1-NN |\\n |-------|-------------------------------|---------|---------|\\n | 0 | 0.3399 | 0.4186 | 0.6467 |\\n | 1 | 0.5191 | 0.8096 | 0.8817 |\\n | 2 | 0.6722 | 0.8931 | 0.8768 |\\n | 3 | 0.7401 | 0.9080 | 0.8897 |\\n | 4 | 0.7797 | 0.9269 | 0.9044 |\\n | 5 | 0.8167 | 0.9246 | 0.8985 |\\n | 6 | 0.8235 | 0.9288 | 0.9182 |\\n | 7 | 0.8434 | 0.9320 | 0.9259 |\\n | 8 | 0.8682 | 0.9414 | 0.9253 |\\n | 9 | 0.8814 | 0.9411 | 0.9283 |\\n | 10 | 0.8893 | 0.9380 | 0.9388 |\\n | 11 | 0.8895 | 0.9499 | 0.9379 |\\n\\n It can be observed that both SVM and 1-NN achieve better performance compared to the AL process, indicating that the dimensionality reduction method effectively preserves the local structural characteristics of the original data. More experimental results will be included in the appendix.\\n\\n#### References\\n\\n[1] Smyth, Barry, Mark Mullins, and Elizabeth McKenna. \\\"Picture Perfect: Visualisation Techniques for Case-based Reasoning.\\\" *ECAI*. 2000. \\n[2] Namee, Brian Mac, and Sarah Jane Delany. \\\"CBTV: Visualising case bases for similarity measure design and selection.\\\" *International conference on case-based reasoning*. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. \\n[3] Huang, Haiyang, et al. 
\\\"Towards a comprehensive evaluation of dimension reduction methods for transcriptomic data visualization.\\\" *Communications Biology* 5.1 (2022): 719.\\n\\n### **Q5: Presentation of visualization results**\\n\\nWe sincerely thank you once again for your suggestions regarding the presentation of visualization results. We will include more high-resolution figures in the appendix and revise the way figures are presented in the main text.\\n\\nThis paper primarily aims to leverage visualization to explore the sampling behavior characteristics of some classical query strategies. Based on this, developing a visualization tool that enables interactive and dynamic observation of AL is our next step.\"}",
"{\"summary\": \"This paper presents a Voronoi tessellation-based method to visualize decision boundaries in active learning. By highlighting ridges between cells with differing predictions, this method provides a clearer view of the decision boundaries. In addition, it introduces a ridge confidence metric to quantify prediction uncertainty. Experiments on MNIST and CIFAR-10 illustrate the effectiveness of this approach for analyzing various active learning strategies, providing insights into the behavior and effectiveness of different sampling techniques.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Comprehensive Comparison of Active Learning Strategies: The evaluation includes a systematic comparison of various active learning strategies. These potentially contribute to the broader discourse on active learning methods and offer practical guidance for future research in this area.\\n2. Effective Use of Visualization Techniques: The paper incorporates visualization methods to facilitate the analysis of data and models. This makes the analysis and findings more accessible and intuitive.\", \"weaknesses\": \"1. **Concerns about Decision Boundary Generation**. From the figures, the projection results and the Voronoi tesselation results appear to be fixed across multiple rounds. However, as feature vectors update during training, the projection results and the decision boundaries should also be updated. It is essential to clarify how well these fixed tessellations with the updated predictions capture the model's behavior. In addition, the use of 2D projections to represent high-dimensional decision boundaries raises concerns, as results can vary significantly based on the selected projection method and parameters.\\n2. **Insufficient evaluation**. While this paper compares different active learning strategies and summarizes some insights, this evaluation is not sufficient and rigorous. On the one hand, the evaluation is only conducted on MNIST and CIFAR-10, which only contain a few classes with substantial differences between them. It remains uncertain how the proposed method performs with more classes or finer distinctions. On the other hand, the evaluation of boundary effectiveness is inadequate. Many insights, such as oversampling on the boundary region in the uncertainty-based methods, can be identified from scatterplots alone, making the Voronoi approach seem unnecessary.\\n3. **Omission of Relevant Literature**. The paper overlooks significant works that utilize Voronoi tessellation for visualizing sample boundaries. For instance, Chen et al. [*] use it to visualize the samples and boundaries in the analysis of label propagation. Including such references would enhance the contextual foundation of the research.\\n[*] Interactive Graph Construction for Graph-Based Semi-Supervised Learning. TVCG 2021.\", \"questions\": \"1. Decision Boundary Representation: How do the fixed Voronoi tessellation results relate to the updated feature vectors during training? Can you explain how this approach ensures an accurate representation of the model's decision boundaries throughout the training process?\\n2. Generalizability: How does the method work on more complicated datasets, such as ImageNet (with more classes) and CUB-200-2011 (fine-grained classification tasks)\\n2. Boundary Effectiveness: Can you elaborate on how the Voronoi tessellation approach adds value beyond insights obtainable from scatterplots? 
What specific contributions does it provide in terms of understanding boundary effectiveness that traditional methods do not?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We sincerely appreciate the reviewer **rwsQ** for the comment.\\n\\n### **Q1: How do the fixed Voronoi tessellation results relate to the updated feature vectors during training?**\\n\\nThere is a misunderstanding regarding the features we used to generate the figures. In lines 208-232 of the paper, we describe the features used to generate the Voronoi diagram. In fact, we have two ways of constructing a 2D feature map:\\n\\n1. **Dynamically Updated Features**: These features are dynamically updated during active learning (e.g., Figure 2). They are extracted by the trained model at each round on the pool data and test data, with the dimensionality-reduced results serving as Voronoi points for constructing our visualization.\\n\\n2. **Fixed Features for Posterior Analysis**: These features are used for posterior analysis (e.g., Figures 5, 6, and 7). Because we need to observe and compare the sampling behaviors and results of different query strategies, we focus on analyzing the confidence decision boundary, sampling points (newly queried points), and the impact of new samples on model training (error detection) using a fixed feature map. This approach is necessary due to differences introduced by the features extracted by models from different rounds and further variations caused by different dimensionality reduction methods.\\n\\nThus, the second method involves training a **visualization model** (described in Section 3.1.3) with all the pool data to help generate fixed features, which are then used for the feature map in every round. The reason for using the visualization model is explained in lines 211-215 of the paper.\\n\\n\\n### **Q2: How does the method work on more complicated datasets?**\\n\\nWe believe our method can works well on large-scale datasets. If the dataset is exceedingly large, a subset of it can be used for visualization. On a smaller dataset that is independently and identically distributed (i.i.d.) with the larger dataset, the decision boundary of the trained model shows relatively small differences compared to the one trained on the larger dataset. We computed the prediction differences on the test set and the overlap of decision boundaries (formed by predicted ridges) on the test set between models trained on smaller datasets of varying sizes and the **Visualization Model (VM)** trained on the whole pool dataset. 
The experimental results are as follows:\\n\\n#### - Prediction Differences:\\n\\n| Model trained on | KL Divergence\\u2193 | JS Divergence\\u2193 | Wasserstein Distance\\u2193 |\\n|--------------------------|----------------|----------------|------------------------|\\n| **MNIST (1000 samples)** | 0.1890 | 0.0324 | 0.0067 |\\n| **MNIST (5000 samples)** | 0.0515 | 0.0101 | 0.0028 |\\n| **MNIST (10000 samples)**| 0.0362 | 0.0067 | 0.0019 |\\n| **CIFAR-10 (1000 samples)** | 0.1255 | 0.0318 | 0.0121 |\\n| **CIFAR-10 (5000 samples)** | 0.0371 | 0.0094 | 0.0045 |\\n| **CIFAR-10 (10000 samples)**| 0.0177 | 0.0043 | 0.0025 |\\n\\nThe smaller the values of KL Divergence, JS Divergence, and Wasserstein Distance, the more similar the two predicted probability distributions are.\\n\\n#### - Overlapping Predicted Ridges:\\n\\n\\n| Model trained on | Total Predicted Ridges\\u2193 | Overlapping Predicted Ridges (compared to VM)\\u2191 |\\n|--------------------------|----------------|--------------------------------------|\\n| **CIFAR-10 (1000 samples)** | 3728 | 2587 |\\n| **CIFAR-10 (5000 samples)** | 3479 | 3072 |\\n| **CIFAR-10 (10000 samples)**| 3520 | 3233 |\\n| **CIFAR-10 (40000 samples) VM** | 3426 | 3426 |\\n\\n\\n| Model trained on | Total Predicted Ridges\\u2193 | Overlapping Predicted Ridges (compared to VM)\\u2191 |\\n|--------------------------|----------------|--------------------------------------|\\n| **MNIST (1000 samples)** | 2919 | 1224 |\\n| **MNIST (5000 samples)** | 1936 | 1435 |\\n| **MNIST (10000 samples)**| 1880 | 1518 |\\n| **MNIST (50000 samples) VM** | 1799 | 1799 |\\n\\nAdditionally, we will fill different cells with colors to make it easier to observe the changes in regions. \\n\\nThe real issue is likely to arise when there are many classes. Anything beyond approximately 20 classes becomes very challenging to visualize. To address this, we recommend splitting the task into multiple sub-classification tasks and performing visualization on these smaller subsets.\\n\\nOur approach mainly focuses on exploring the sampling behaviors of classical query strategies and analyzing the impact of newly sampled points on the model\\u2019s training in the subsequent round.\"}",
"{\"comment\": \"I thank the authors for clarifying some of my concerns and improving the manuscript. However, I still disagree with the choice of color encodings for some visualizations:\\n\\n- Using a qualitative color scale for a continuous value neglects the similarity between values.\\n- Introducing intervals for a continuous value produces potential biases as the interval boundaries might change the entire color encoding. The interval boundaries are chosen arbitrarily.\\n\\nI keep to the score of reject.\"}",
"{\"comment\": \"We really appreciate reviewer **rwsQ's** response. We believe your concern is very insightful, especially in the context of low-dimensional visualizations.\\n\\nDifferent dimensionality reduction methods result in varying degrees of information loss, which can cause spatial distortion in our visualization results and, consequently, affect the decision boundary constructed in the reduced dimensions. However, this spatial distortion is an inherent and objective issue, and its impact is limited according to the experimental results. Our visualization method does not rely on any specific dimensionality reduction technique, and exploring the effect caused by different dimensionality reduction methods across various datasets is not the core focus of this work. However, in the revision, we will include an analysis of the effect caused by the dimensionality reduction technique currently employed (t-SNE) in our method.\", \"we_used_two_different_methods_for_evaluation\": \"1. **Correlation Test** [1, 2]:\\n This method evaluates the Pearson correlation between the pairwise similarity of the features extracted by the model and the pairwise distance matrix calculated from the dimensionality-reduced data. The results are shown in the table below:\\n\\n | |Pearson Correlation|\\u2191 | EntropySampling (Avg) | RandomSampling (Avg) | Visualization Model |\\n |--------------------------|-----------------------|-----------------------|---------------------|\\n | **MNIST** | 0.6307 | 0.6630 | 0.5951 |\\n | **CIFAR-10** | 0.5049 | 0.4615 | 0.5166 |\\n\\n Since sampling occurs over multiple rounds, we calculated the average values. The absolute value of Pearson Correlation closer to 1 indicates a higher correlation, while values closer to 0 indicate no correlation. The results indicate that the impact of spatial distortion caused by dimensionality reduction is relatively small.\\n\\n2. **Local Structure Preservation** [3]:\\n We trained and tested an SVM on the same training data as in each round of the AL process, but used the corresponding dimensionality-reduced 2D features for visualization instead of the original data, and compared its accuracy with that of the model in the AL process trained on the original data. Additionally, we evaluated the results of a 1-NN classifier on the 2D test set. The results are as follows:\\n#### - Accuracy:\\n | Round | AL RandomSampling (CIFAR-10) | SVM | 1-NN |\\n |-------|-------------------------------|---------|---------|\\n | 0 | 0.3399 | 0.4186 | 0.6467 |\\n | 1 | 0.5191 | 0.8096 | 0.8817 |\\n | 2 | 0.6722 | 0.8931 | 0.8768 |\\n | 3 | 0.7401 | 0.9080 | 0.8897 |\\n | 4 | 0.7797 | 0.9269 | 0.9044 |\\n | 5 | 0.8167 | 0.9246 | 0.8985 |\\n | 6 | 0.8235 | 0.9288 | 0.9182 |\\n | 7 | 0.8434 | 0.9320 | 0.9259 |\\n | 8 | 0.8682 | 0.9414 | 0.9253 |\\n | 9 | 0.8814 | 0.9411 | 0.9283 |\\n | 10 | 0.8893 | 0.9380 | 0.9388 |\\n | 11 | 0.8895 | 0.9499 | 0.9379 |\\n\\n It can be observed that both SVM and 1-NN achieve better performance compared to the AL process, indicating that the dimensionality reduction method effectively preserves the local structural characteristics of the original data. More experimental results will be included in the appendix.\\n\\n#### References\\n\\n[1] Smyth, Barry, Mark Mullins, and Elizabeth McKenna. \\\"Picture Perfect: Visualisation Techniques for Case-based Reasoning.\\\" *ECAI*. 2000. \\n[2] Namee, Brian Mac, and Sarah Jane Delany. 
\\\"CBTV: Visualising case bases for similarity measure design and selection.\\\" *International conference on case-based reasoning*. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. \\n[3] Huang, Haiyang, et al. \\\"Towards a comprehensive evaluation of dimension reduction methods for transcriptomic data visualization.\\\" *Communications Biology* 5.1 (2022): 719.\"}"
]
} |
12iSWNLDzj | Text To Stealthy Adversarial Face Masks | [
"Ben Lewis",
"Thomas Moyse",
"James Parkinson",
"Elizabeth Telford",
"Callum Whitfield",
"Ranko Lazic"
] | Recent studies have demonstrated that modern facial recognition systems, which are based on deep neural networks, are vulnerable to adversarial attacks, including the use of accessories, makeup patterns, or precision lighting. However, developing attacks that are both robust (resilient to changes in viewing angles and environmental conditions) and stealthy (do not attract suspicion by, for example, incorporating obvious facial features) remains a significant challenge. In this context, we introduce a novel diffusion-based method (DAFR) capable of generating robust and stealthy face masks for dodging recognition systems (where the system fails to identify the attacker). Specifically our approach is capable of producing high-fidelity printable textures using the guidance of textual prompts to determine the style. This method can also be adapted for impersonation purposes, where the system misidentifies the attacker as a specific other individual. Finally, we address a gap in the existing literature by presenting a comprehensive benchmark (FAAB) for evaluating adversarial accessories in three dimensions, assessing their robustness and stealthiness. | [
"facial recognition",
"adversarial accessories",
"diffusion models",
"adversarial benchmarks"
] | https://openreview.net/pdf?id=12iSWNLDzj | https://openreview.net/forum?id=12iSWNLDzj | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"mP2iPwzDYG",
"bfEQuLDFIN",
"OOy3N7pKJd",
"J3DcjDDL7Z",
"CKp8UpqU2O"
],
"note_type": [
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1731515764303,
1730705179731,
1730626834768,
1730655019760,
1730417276045
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3229/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3229/Reviewer_9QYn"
],
[
"ICLR.cc/2025/Conference/Submission3229/Reviewer_NidS"
],
[
"ICLR.cc/2025/Conference/Submission3229/Reviewer_QP84"
],
[
"ICLR.cc/2025/Conference/Submission3229/Reviewer_aYrd"
]
],
"structured_content_str": [
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their work.\"}",
"{\"summary\": \"This paper focuses on adversarial attacks on face recognition models using face masks, claiming that existing methods lack in attack stealthiness. It proposes a diffusion-based adversarial face mask generation method, titled DAFR. DAFR controls the generation of the diffusion model using adversarial guidance. Additionally, this paper builds a benchmark to evaluate the performance of existing adversarial accessories.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper addresses an important issue, namely the security of face recognition models, which is meaningful for both academia and industry.\\n2. The paper leverages the ability of diffusion models to generate realistic and natural images to create adversarial face masks, which is an meaningful design.\", \"weaknesses\": \"1. Lack of real-world attack experiments. This paper focuses on the stealthiness of adversarial attacks; however, discussing stealthiness without considering real-world attack implementation seems meaningless. For instance, digital adversarial attack methods can use subtle perturbations undetectable to the human eye, achieving stealthy attacks. The reason existing methods lack stealthiness is due to the need for higher perturbation intensity to achieve real-world attacks. Discussing attack stealthiness without addressing real-world implementation is thus unconvincing.\\n2. Evaluation of stealthiness. Stealthiness is a subjective evaluation dimension, and it is unclear if the quantitative metric CMMD used in this paper matches human perception. I suggest adding a user study to support the paper's claims of improved stealthiness.\\n3. Lack of ablation experiments. There is insufficient analysis of the effectiveness of key components. For example, to improve robustness, the authors optimize the adversarial pattern on a set of images of the attacker, \\ud835\\udc3b; however, it remains unclear if this design actually improves robustness.\\n4. Missing essential references: Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition, CVPR 2023\", \"questions\": \"1. It is recommended that the authors create an adversarial mask to test the attack effectiveness of DAFR in the real world.\\n2. It is suggested that the authors add an ablation study of \\ud835\\udc3b: conduct an experiment comparing performance with and without optimizing over multiple images of the attacker.\\n3. Does text prompt have an effect on the results?\\n4. This paper should include a framework diagram to illustrate the DAFR method, which would enhance readability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes to use diffusion models to generate adversarial accessories (a mask) for attacking facial recognition. Two important attack properties, i.e., robustness (resilient to changes in viewing angles and environmental conditions) and stealthiness (do not attract suspicion by, for example, incorporating obvious facial features), are considered.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Applying text-to-image diffusion models to optimizing adversarial accessories is new.\\n2. Experiments are conducted on multiple datasets, models, text-guided styles, and with various metrics.\\n3. The proposed attack outperforms the baselines regarding stealthiness. \\n4. The attempt to form a benchmark is promising.\", \"weaknesses\": [\"This paper has claimed to improve the attack in both robustness and stealthiness. However, the evaluations in the main body are limited to the stealthiness (and the attack strength in the common 2D digital setting). Although some robustness results are added to the Appendix (Table 6), those results show that the proposed attack is worse than the baselines. In addition, no physical, 3D experiments are considered.\", \"As acknowledged by the authors, the idea of using diffusion models to generate adversarial examples is not new. However, it is not clearly stated what the specific (technical) challenge is for generating adversarial masks, compared to other forms of perturbations, such as the makeups. Without identifying such challenges, the technical novelty of this paper is not clear.\", \"It is interesting to report the results for different text prompts. However, since different prompts lead to dramatically different attack performance (see Table 4), it would be necessary to understand the relation between the prompt and the performance and finally learn to select the optimal one.\", \"Presentation issues:\", \"This paper contains lots of technical details but lacks an overview of the whole attack pipeline (maybe in the form of a flowchart).\", \"Before introducing their method, the paper should include foundational knowledge related to adversarial masks and clarify the meanings of the mathematical symbols. For instance, the meanings of x and I in Algorithm 1 are not defined.\"], \"questions\": \"How does DAFR perform in the physical world (3D robustness) and against black-box models (attack transferability)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a method to generate adversarial face masks that can fool face recognition systems in a white box setting. They specifically borrow the adversarial attack framework from AdvDiff.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"It is an interesting work on a relevant problem. The method performs well over previous baselines and including 3D rendering bridges the gap towards real world applicability.\", \"weaknesses\": \"1. The attack is a simple adaptation of previous approaches i.e. AdvDiff.\\n2. What is R in the Algorithm 1? Should it be $h_i$ instead of $h_n$? \\n3. The generation process is firstly done in a white-box setting, while this in itself is not problematic the authors have not included any results on transferability to see whether it can be used for another model? \\n4. The attack is not robust against state-of-the-art facial recognition models such as ArcFace. The stealth to attack success rate trade-off for R100 is quite large. SASMask achieves almost the same T-CMMD and M-CMMD scores with significantly higher SR. \\n5. Line 208: Which dataset was the ArcFace model trained on? \\n6. The test set consisting of 300 images is not statistically significant. \\n7. A white mask has a SR 1000 of 0.7083 against the fine-tuned F100 model. Is this scenario even worth studying? This simply means that the model does not perform well to begin with. \\n8. Is the attack success impacted by the facial pose?\", \"questions\": \"See weaknesses section.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors propose Diffusion Attack against Facial Recognition (DAFR), a method capable of generating robust and stealthy face masks for dodging recognition systems using the guidance of textual prompts to determine the style. A new benchmark is also presented, the Face Accessory Attack Benchmark (FAAB) which includes a set of standardized tests and procedures to evaluate the performance of accessories.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The literature review is thorough and effectively encompasses key works relevant to the field.\", \"weaknesses\": [\"1. Motivation: While I recognize the significance of prior works highlighting face masks as a potential threat vector during the COVID-19 pandemic, their prevalence has markedly declined since then. As a result, I find it challenging to accept the notion that face masks, irrespective of printed patterns, can currently be considered a genuinely stealthy accessory.\", \"2. Writing quality: the submission is mainly held back by the writing quality and lack of clarity in most sections. These are mainly focused around the method section (Section 2), which seems a bit unprofessional, lacking formulaic descriptions of relevant preliminaries and the attack\\u2019s full pipeline (e.g., how the components interact with each other) and overall extremely unorganized. I suggest the authors reorganize (e.g., split the section into subsections, each describing the main steps of the attack) and improve (formalize the entire pipeline) this section for better clarity of the novelties they propose.\", \"3. Novelty: The proposed method appears to lack substantial originality, as it primarily combines existing approaches adapted to this specific task, without introducing significant additional contributions. For instance, the mask rendering technique for facial images is adopted from Zolfi et al. (2022), while the diffusion-based adversarial attack approach is drawn from Dai et al. (2023), with adjustments made for the face recognition domain. Although the authors mention the use of textual prompts to control style, the method section lacks a clear methodological explanation of this aspect, including details on how prompts are selected and their impact on the attack's objectives.\", \"4. Evaluation: While the authors assess their attack against various baselines and across multiple models and datasets, several critical aspects remain unaddressed:\", \"Shallow Analysis \\u2013 The authors predominantly present empirical results without delving into deeper insights, examining edge cases, or discussing unexpected findings.\", \"Missing Ablation Studies \\u2013 For instance, Equation 1 claims that a scaling function is superior to a static value, yet no comparative analysis is provided to validate this assertion.\", \"Lack of Transferability Experiments \\u2013 A crucial aspect of adversarial attacks is their transferability to models beyond those on which they were trained. Testing transferability could offer valuable insights into the practicality of the proposed attack.\", \"Absence of Real-World Experiments \\u2013 Although digital evaluations form the core of the paper, an attack using a practical accessory would benefit from real-world testing to demonstrate efficacy beyond digital scenarios.\", \"Implementation Details \\u2013 The section includes an excess of low-level details (specific steps for each decision), which detracts from the key information. 
I recommend prioritizing content between the main paper and the appendix, allowing for more space for additional experiments, such as those in Sections D, E, and F of the appendix.\", \"Results \\u2013 The results across most models do not consistently outperform the baselines (notably SASMask). Ideally, the proposed method should exhibit at least comparable performance to baselines while enhancing stealthiness. The current configuration seems to prioritize stealthiness at the expense of attack success.\"], \"minor_comments\": \"1. Line 89 \\u2013 CMMD is not introduced until line 446 (not even a reference to it).\\n2. Line 125 \\u2013 unclear why $f$ is mentioned here.\\n3. Line 137 \\u2013 unclear what $R$ is.\\n4. Line 301 \\u2013 table 3 is too far from where it was mentioned in the text. Maybe the table could be split to better fit in the flow of the text.\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
12gMsxpu4G | OCS+: Improving PTQ with Outlier Translation | [
"Qiulin Zhang",
"Zhaojing Wen",
"Yuan Zhang",
"Rudan Chen",
"Xichao Yang",
"Di Xie",
"Jiang Zhu"
] | Post-training quantization (PTQ) is an effective technique for accelerating DNN model inference, where activations typically follow a bell-shaped distribution. Since commodity hardware employs a linear quantization grid and limited quantization levels, prior PTQs optimize a clipping threshold to minimize overall quantization error, which excludes outliers from the bell-shaped data. However, outliers are non-trivial for low-bit and lightweight models. Thus OCS (Zhao et al.,2019) proposed to save outliers by halving and duplicating. However, in activation quantization, the original OCS sacrifices the precision of the regular inliers, leading to severe accuracy degradation. To address this, we propose OCS+ to save outlier activation without affecting the regular inliers. Consequently, OCS+ theoretically achieves one-bit higher representation under the predefined bitwidth hardware. OCS+ is based on offline mathematical transformation, thus it does not require additional training or re-design works on hardware. Experiments over CNNs and ViTs demonstrate OCS+ significantly outperforms OCS and help improve current PTQ SOTAs, e.g., OCS+ improves the current SOTAs by 12.73\% in Acc@1 for W2A2 MobileNet-v2. The code will be released. | [
"Post Training Quantization"
] | Reject | https://openreview.net/pdf?id=12gMsxpu4G | https://openreview.net/forum?id=12gMsxpu4G | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wao1gV45MR",
"ifUwSLC7Od",
"i3DmdIiTDG",
"1gd3im15x4",
"1FkA6d5Ryy"
],
"note_type": [
"official_review",
"official_review",
"meta_review",
"decision",
"official_review"
],
"note_created": [
1730611305827,
1730189911711,
1734697016172,
1737523631672,
1730451545063
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4300/Reviewer_jQCD"
],
[
"ICLR.cc/2025/Conference/Submission4300/Reviewer_QVvG"
],
[
"ICLR.cc/2025/Conference/Submission4300/Area_Chair_UZgm"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4300/Reviewer_Rcu4"
]
],
"structured_content_str": [
"{\"summary\": \"Outliers while quantization are one of the representative elements that harm the performance of neural networks.\\nWhat makes things worse is that target hardware is usually fixed and hard to change settings such as bit precision.\\nThus, if the outlier problem becomes severe and the performance of the quantized model deteriorates, it may be difficult to resolve the problem by using a higher bit.\\n\\nThere already exists a prior work that tries to resolve this problem which is called OCS.\\nBy halving the activation values of outlier channels and duplicating those channels as additional channels, OCS successfully alleviates the outlier problem.\\nHowever, OCS causes another problem; rounding error of inliers.\\n\\nTo mitigate both problems (clipping error of outliers and rounding error of inliers) simultaneously, the paper proposes OCS+.\\nThe paper adopts translation instead of halving operation.\\nBy doing so, it achieves the same functional effect equivalent to using one more bit with moderate computational overhead.\\nWith various experiments, the paper shows that OCS+ outperforms other previous works, even with the same number of channels added by OCS+.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper highlights the problem of previous work, OCS.\", \"The paper shows performance gain compared to OCS.\", \"Experimental design and multifaceted analysis of the proposal are commendable.\", \"Under the situations that the paper assumes (e.g., the target hardware and bit settings are fixed), significant performance improvements are expected with additional computation overhead.\"], \"weaknesses\": [\"Several problems that OCS already has can be the same problems of OCS+.\", \"The computational overhead due to additional channels\", \"The purpose of quantization is to run a large model on limited resources. Therefore, additional computational overhead induced by OCS+ has a worse impact on hardware that is hard to adjust bit precision, which is the target of OCS+.\", \"The proportion of channels suffering the outlier problem and outlier channel ID can differ according to inputs. Analysis of channel sensitivity with different inputs can be a good experiment.\"], \"questions\": \"In Table 1, are those 3 results attained from the same weight parameters and clipping range? Otherwise, are the clipping ranges of the results different?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduced OCS+, a PTQ for solving the outlier in the quantization process. First, this paper demonstrated that outliers are non-trivial. Motivated by OCS, the original version of this paper, OCS+ is introduced. OCS+ duplicates the important (outlier) activation channel and corresponding weight. Thus, OCS+ achieves high performance with the costs of more computation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. This paper provided comprehensive experimental results.\\n\\n2. This paper has clear logic that makes their method easy to understand.\", \"weaknesses\": \"1. Lack of necessary fair comparison. Actually, OCS+ introduces higher computation costs since it increases the weight and activations by 50%. It added 50% extra computation. This makes OCS+ less attractive since the compared methods do not incur extra computation. Also, some related descriptions such as the analysis of FLOP (compared with previous methods)are lacking in the paper.\\n\\n2. How to select the important activation channels missed in the paper?\\n\\n3. The implementation details of Table 7 are missing. What are the platform and running software?\\n\\n4. Running time comparison with previous methods that do not incur extra computation such as BRECQ?\\n\\n5. According to the paper, the important activation channels seem to be selected on the fly. Does this operation incur more inference time costs?\", \"questions\": \"I am wondering what the point of extremely low-bit quantization is, such as 2-bit. Does this extremely low-bit make any practical use? Could the author provide some insight into this?\\n\\nI currently turn to reject this paper for its novelty and experiment comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"Although this work aims at addressing the outlier activation preservation issue, the drawbacks related to computational cost, limited novelty, and lack of comprehensive comparisons and essential details outweigh its contribution. All reviewers gave negative scores and there is no response to their concerns. Therefore, this work is recommended to be rejected.\", \"additional_comments_on_reviewer_discussion\": \"There were no discussions between reviewers and authors because authors did not respond to comments from reviewers.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper proposes a post-training quantization method named OCS+, which aimed at saving outlier activation without affecting the regular inliers. Based on the offline transformation with weight and activation, it does not require additional training. Experimental results show that the proposed method works for both CNN and ViTs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.The proposed OCS+ preserves the outlier activations without sacrificing the precision of regular inliers, which allows for a theoretical increase in presentational power from b-bits to (b+1)-bits under the same hardware constraints.\\n\\n2.OCS+ is based on the offline mathematical transformations, which does not require additional training or hardware re-design. \\n\\n3.Experimental results show OCS+ achieves an improvement over the previous methods both on CNN and ViTs.\", \"weaknesses\": \"1.The presentation need to be improved, such as Page 4.\\n\\n2.The actual speed-up need to be evaluated since OCS+ introduces additional computational costs.\\n\\n3.Some typos: such as \\nLine 93 and Line 99 OCS+-.\\n\\nLine 36 quant parameters\\n\\nLine 253 translate down?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
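
Note on the OCS+ record above: its abstract describes OCS saving outlier activations by halving and duplicating channels, with OCS+ instead translating outliers to spare the inliers. The sketch below is only a hypothetical illustration of the basic channel-splitting idea under assumed shapes and a made-up clipping threshold; it is not the authors' released code and does not implement the OCS+ translation itself. It shows that halving an outlier activation channel, duplicating it, and duplicating the matching weight row leaves a linear layer's output unchanged while shrinking that channel's dynamic range.

import numpy as np

def split_outlier_channels(x, w, clip):
    # x: activations, shape (batch, C); w: weights, shape (C, out_features).
    # Channels whose max |activation| exceeds `clip` are halved and duplicated,
    # and the matching weight rows are duplicated, so x @ w is preserved.
    outlier = np.abs(x).max(axis=0) > clip
    x_split = np.where(outlier, x / 2.0, x)                  # halve outlier channels
    x_new = np.concatenate([x_split, x_split[:, outlier]], axis=1)  # append duplicates
    w_new = np.concatenate([w, w[outlier, :]], axis=0)       # duplicate weight rows
    return x_new, w_new

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
x[:, 3] *= 10.0                                              # make channel 3 an outlier
w = rng.normal(size=(8, 5))
x2, w2 = split_outlier_channels(x, w, clip=3.0)
assert np.allclose(x @ w, x2 @ w2)                           # layer output is unchanged

The final assert checks the equivalence numerically; in a real post-training quantization pipeline the split channels and clipping threshold would be chosen per layer from calibration statistics rather than a fixed constant.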